<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mohamed Abdallah</title>
    <description>The latest articles on Forem by Mohamed Abdallah (@mohamedabdallah14).</description>
    <link>https://forem.com/mohamedabdallah14</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F551949%2Fd3bcec2d-9df1-47b0-874e-f5088e30ca48.png</url>
      <title>Forem: Mohamed Abdallah</title>
      <link>https://forem.com/mohamedabdallah14</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mohamedabdallah14"/>
    <language>en</language>
    <item>
      <title>Claude rewrote my resume and I couldn’t send it, so I built unslop.</title>
      <dc:creator>Mohamed Abdallah</dc:creator>
      <pubDate>Thu, 30 Apr 2026 00:27:51 +0000</pubDate>
      <link>https://forem.com/mohamedabdallah14/claude-rewrote-my-resume-and-i-couldnt-send-it-so-i-built-unslop-2hhd</link>
      <guid>https://forem.com/mohamedabdallah14/claude-rewrote-my-resume-and-i-couldnt-send-it-so-i-built-unslop-2hhd</guid>
      <description>&lt;h1&gt;
  
  
  I built unslop because Claude rewrote my resume and I couldn't send it
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2cpbmzes46kxh0xd4ro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2cpbmzes46kxh0xd4ro.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was updating my CV. New experience, new role, the usual. I'd written it myself a few years back. I asked Claude to update the wording with my latest projects.&lt;/p&gt;

&lt;p&gt;I read the output. It was AI.&lt;/p&gt;

&lt;p&gt;You know the feeling. You read a paragraph, and the first thing your brain says is &lt;em&gt;this is written with AI&lt;/em&gt;. It doesn't matter how careful the person was. It doesn't matter that they reviewed every line. The moment the reader thinks "AI", the work gets discounted before they finish the sentence. The effort doesn't transfer.&lt;/p&gt;

&lt;p&gt;I'm a developer. I'm not the best at writing in English. I never was. Email, PR comments, Jira tickets — I lean on AI to help me get the wording right because prose isn't the part of the job I'm good at. That was the deal AI was supposed to give me. Help the engineers who can't write.&lt;/p&gt;

&lt;p&gt;The deal is broken. Because the output reads as AI. And the moment your reader thinks "AI", everything you wrote loses weight.&lt;/p&gt;

&lt;p&gt;There’s a paper on this. Liang et al., 2023 (arXiv:2304.02819). They ran AI detectors against essays written by non-native English speakers and against essays written by US 8th-graders. The detectors consistently flagged the non-native writing as AI. The native writing went through clean. So the tool meant to level the field for ESL writers is actively penalizing them. AI for everyone, except the people who need it most.&lt;/p&gt;

&lt;p&gt;That's the detector side. But the same fingerprint that triggers a detector triggers a human reader. We've all gotten good at this. Em dashes everywhere. "It's worth noting that." "Delve." Sycophancy openers. Hedging stacks. Tricolons stacked three deep. You see one or two of them and the brain switches modes.&lt;/p&gt;

&lt;p&gt;So I took this on myself. Not by paying $9.99 a month for some "AI humanizer" SaaS site, but by building something open-source that runs in my terminal, in my editor, inside my Claude Code session. And by making it actually work.&lt;/p&gt;

&lt;p&gt;I called it &lt;strong&gt;unslop&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;unslop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Repo: &lt;a href="https://github.com/MohamedAbdallah-14/unslop" rel="noopener noreferrer"&gt;github.com/MohamedAbdallah-14/unslop&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What it actually does
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flczwwb1uarz089c94etg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flczwwb1uarz089c94etg.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  I read 38 papers to build this
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9uvzirp1egvp1e6st9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9uvzirp1egvp1e6st9i.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full reading list is in &lt;a href="https://github.com/MohamedAbdallah-14/unslop/blob/main/docs/RESEARCH_AND_TECH.md" rel="noopener noreferrer"&gt;&lt;code&gt;docs/RESEARCH_AND_TECH.md&lt;/code&gt;&lt;/a&gt;. Five things I learned that ended up in the code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Warmth and reliability trade off. Ibrahim et al. (arXiv:2507.21919, 2025) trained models to be warmer and more empathetic. The result: error rates went up by 10–30 percentage points across safety-critical tasks. Warm-trained models were significantly more likely to validate the user’s incorrect beliefs, especially when the user sounded sad. So the “be friendlier” finetuning isn’t neutral. It’s making the output less reliable. unslop subtracts. It never adds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sycophancy is the loudest tell. SycEval (arXiv:2502.08177) measured sycophantic agreement in 58.19% of cases across ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro. Gemini was worst at 62.47%, ChatGPT best at 56.71%. None of them under 50%. The first thing readers notice. The first thing unslop strips.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Detectors read variance, not vocabulary. DivEye (TMLR 2026) shows modern detectors look at intra-document surprisal variance. Even if you swap synonyms, the variance fingerprint persists. So the rewrite has to engineer burstiness — mix sentence lengths across the paragraph — not just swap words.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verbal uncertainty beats numeric confidence. Tao et al. (arXiv:2505.23854, 2025) found that linguistic verbal uncertainty (“I think”, “probably”, “seems”) consistently outperforms token-probability and numeric-confidence methods on both calibration and discrimination. So unslop preserves real uncertainty in human form, not as confidence intervals.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Voice imitation is harder than it looks. EMNLP 2025 Findings (arXiv:2509.14543) tested LLMs on imitating personal writing styles. They can approximate structured formats like news and email. They struggle on the nuanced, informal voice you’d find in blogs or forums. unslop’s voice-match mode is honest about being a best-effort approximation, not a clone.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are 33 more papers, and each one shaped a specific decision in the code. The point isn't to flex the bibliography. The point is that the standard "make AI sound human" advice you find online — add emoji, add warmth, add filler — is the opposite of what the research says works.&lt;/p&gt;

&lt;p&gt;You subtract. You don't add.&lt;/p&gt;
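
&lt;p&gt;To make "subtract, don't add" concrete: a minimal sketch of the idea. This is not unslop's actual rule set; the opener and filler patterns are invented stand-ins. It strips a few stock phrases and measures sentence-length variance (a rough burstiness signal) instead of padding anything in:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative subtract-only pass. Not unslop's implementation; the
# opener and filler patterns below are invented examples of the same idea.
import re
import statistics

OPENERS = ("great question!", "certainly!", "i'd be happy to help.")
FILLERS = (
    r"\bit'?s worth noting that\s*",
    r"\bin today'?s fast-paced world,?\s*",
)

def subtract_only(text):
    cleaned = text.strip()
    for opener in OPENERS:
        if cleaned.lower().startswith(opener):
            cleaned = cleaned[len(opener):].lstrip()
    for pattern in FILLERS:
        cleaned = re.sub(pattern, "", cleaned, flags=re.IGNORECASE)
    # Burstiness: low variance in sentence length is part of the fingerprint
    # detectors read. Report it; never pad sentences to fix it.
    lengths = [len(s.split()) for s in re.split(r"[.!?]+\s*", cleaned) if s]
    variance = statistics.pvariance(lengths) if len(lengths) &gt; 1 else 0.0
    return cleaned, variance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The shape is the point: deletions and a measurement, nothing added.&lt;/p&gt;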




&lt;h2&gt;
  
  
  Who this is for
&lt;/h2&gt;

&lt;p&gt;If you're a developer who uses LLMs for emails, tickets, PR descriptions, comments — and you've felt that twinge when the output reads as AI — this is for you.&lt;/p&gt;

&lt;p&gt;If you're an ESL writer who's tired of having your writing flagged as AI by detectors that can't tell the difference between you and a chatbot.&lt;/p&gt;

&lt;p&gt;If you ship a lot of prose with Claude Code and you don't want every response to start with "Great question!"&lt;/p&gt;

&lt;p&gt;If you want to humanize your output without paying $9.99/month to a SaaS site that's a worse wrapper around the same models you're already paying for.&lt;/p&gt;




&lt;h2&gt;
  
  
  The honest close
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qczi74fdbokz50trs77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qczi74fdbokz50trs77.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's 2 AM as I'm finishing this post. I'm a solo developer with no following. There's a real chance this dies in the graveyard of GitHub. There's another chance it takes years to get noticed.&lt;/p&gt;

&lt;p&gt;I checked the repo this morning. Three people starred it yesterday. That made my week.&lt;/p&gt;


&lt;p&gt;If you read this far and any part of it landed, here's the ask:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;unslop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Star the repo: &lt;a href="https://github.com/MohamedAbdallah-14/unslop" rel="noopener noreferrer"&gt;github.com/MohamedAbdallah-14/unslop&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If it helps you, tell one person.&lt;/p&gt;

&lt;p&gt;That's the whole pitch.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>writing</category>
      <category>programming</category>
    </item>
    <item>
      <title>When "no AI in the calculation" is a feature, not a bug</title>
      <dc:creator>Mohamed Abdallah</dc:creator>
      <pubDate>Sun, 26 Apr 2026 22:56:39 +0000</pubDate>
      <link>https://forem.com/mohamedabdallah14/when-no-ai-in-the-calculation-is-a-feature-not-a-bug-4dm2</link>
      <guid>https://forem.com/mohamedabdallah14/when-no-ai-in-the-calculation-is-a-feature-not-a-bug-4dm2</guid>
      <description>&lt;p&gt;I work on an estimation engine where the same input always produces the same output. In 2026 that's apparently a controversial design choice.&lt;/p&gt;

&lt;p&gt;Every other tool in the demo deck has a sparkle icon now. The investor slide that doesn't say "AI-powered" feels like it was printed in 2019 and forgotten. So when a B2B platform's pitch says, in plain text, "the calculation contains no AI" — people stop and ask if that's a typo.&lt;/p&gt;

&lt;p&gt;It isn't. It's the product.&lt;/p&gt;

&lt;h2&gt;
  
  
  The thing the engine does
&lt;/h2&gt;

&lt;p&gt;Let me describe the shape without describing the brand. It's a deterministic software-estimation platform I work on with a client. You feed it a structured project description — features, integrations, target platforms, team composition assumptions, a long list of normalized inputs. It returns a number: hours, cost, range. That number lands in a contract. A buyer signs it. A vendor delivers against it. If the estimate is wrong by 40%, somebody loses money.&lt;/p&gt;

&lt;p&gt;The engine that produces that number is a few thousand lines of TypeScript. Eighty-something modules. Rules, weights, lookups, modifiers. No model call anywhere in the calculation path. Same inputs in, same number out. Today, tomorrow, next March.&lt;/p&gt;

&lt;p&gt;When I describe this to other engineers in 2026, the second question is always: "Why not put an LLM in there? It would handle the edge cases."&lt;/p&gt;

&lt;p&gt;It would. That's the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust is not vibes
&lt;/h2&gt;

&lt;p&gt;A contract estimate is a number two parties stake real money on. The buyer wants to know what they're paying for. The vendor wants to know what they can deliver. Both want to know the number didn't move because somebody refreshed the page.&lt;/p&gt;

&lt;p&gt;If I run the same estimate twice and get 312 hours then 287 hours, I haven't estimated anything. I've sampled from a distribution and presented the sample as a fact. That's a category error. Estimation implies a method. A method implies repeatability. Without repeatability, "estimate" is just a confident-sounding guess in a nice UI.&lt;/p&gt;

&lt;p&gt;LLMs sample from distributions. That's their whole architecture. You can pin temperature to zero, but you're still at the mercy of model updates, context formatting, token boundaries, and the silent retraining the vendor pushes on a Tuesday. I've seen the same prompt return materially different answers across a model version bump. In a chatbot, that's a quirk. In a contract, it's malpractice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Audit is the killer requirement
&lt;/h2&gt;

&lt;p&gt;Estimates get re-litigated. Always. A project goes 30% over. The buyer wants to know why. The vendor wants to know if scope crept or if the original number was wrong. Somebody — usually a project manager who was not in the original conversation — has to reconstruct how the number got built.&lt;/p&gt;

&lt;p&gt;With a rule-based engine, that reconstruction is mechanical. You open the calculation log. You see which modules fired, which inputs they consumed, which weights applied. You point at the line that says "multi-tenancy modifier: +18%" and you have a conversation about whether that modifier was correct. The disagreement is bounded.&lt;/p&gt;
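
&lt;p&gt;To make that concrete, here is a toy sketch of the shape. It is not the client's engine (that one is TypeScript, eighty-odd modules), and the module names and percentages are invented. The point is the structure: pure rules, explicit weights, and a log line for every modifier that fires, so the "+18%" conversation has something to point at:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Toy sketch of a deterministic, auditable estimate. The rules and
# numbers are invented for illustration; the real engine is TypeScript.
from dataclasses import dataclass, field

@dataclass
class Estimate:
    hours: float
    audit: list = field(default_factory=list)

# Each modifier is a pure function of the structured inputs.
# Same inputs in, same number out, and every firing is logged.
MODIFIERS = [
    ("multi_tenancy", lambda inp: 0.18 if inp.get("multi_tenant") else 0.0),
    ("extra_platform", lambda inp: 0.12 * max(len(inp.get("platforms", [])) - 1, 0)),
    ("payment_integration", lambda inp: 0.08 if "stripe" in inp.get("integrations", []) else 0.0),
]

def estimate(base_hours, inputs):
    result = Estimate(hours=base_hours)
    for name, rule in MODIFIERS:
        pct = rule(inputs)
        if pct:
            result.hours *= 1 + pct
            result.audit.append(f"{name} modifier: +{pct:.0%}")
    return result

# estimate(200, {"multi_tenant": True, "platforms": ["ios", "android"]})
# gives 200 * 1.18 * 1.12 = 264.32 hours, with two audit lines to point at.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Re-running it with the same inputs reproduces both the number and the log, which is the whole argument.&lt;/p&gt;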

&lt;p&gt;With an LLM in the path, the audit trail is "the model decided." You can ask it to explain itself, and it will, in fluent English that may or may not reflect what actually drove the output. There's no causal trace from input to output that a human can verify. The model's "reasoning" is a post-hoc story it told itself to keep the conversation going.&lt;/p&gt;

&lt;p&gt;This is fine for "summarize this article." It is not fine for a number on a procurement contract. The deeper issue is liability. When the engine is deterministic, the vendor can defend the estimate by walking through the rules. When it isn't, the defense is "trust the AI" — which a lawyer will eat alive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reproducibility, plainly
&lt;/h2&gt;

&lt;p&gt;I'll say it the blunt way. An estimator that gives different answers to the same input is not estimating. It's hallucinating with confidence. The fact that the hallucinations cluster around a plausible value most of the time makes it worse, not better, because it lets the failure mode hide.&lt;/p&gt;

&lt;p&gt;Determinism here isn't a nostalgia thing. It's a contract with the user that says: if you change the answer, you changed an input. If you didn't change an input, the answer is the same. That contract is what makes the tool a tool and not an oracle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where AI actually earns its keep
&lt;/h2&gt;

&lt;p&gt;I'm not anti-AI here. The MCP image-generation server I run in production sits one repo over from this argument. I'm not arguing for a museum.&lt;/p&gt;

&lt;p&gt;The argument is about &lt;em&gt;placement&lt;/em&gt;. There are three spots in this product where AI does real work and nobody objects:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Input parsing.&lt;/strong&gt; Customers describe projects in prose. "We need a Flutter app with iOS and Android, two user roles, payment via Stripe, and a moderator dashboard." A model is great at turning that into the structured input the engine expects — feature flags, platform targets, role count, integrations. If it gets it wrong, the user sees the structured form and corrects it before hitting calculate. The AI's output is a draft for a human to confirm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Similar-project retrieval.&lt;/strong&gt; Given a finished estimate, an embedding lookup over past projects ("here are five engagements with similar shape and scope; here's how they actually came in") is genuinely useful. It contextualizes the deterministic number without replacing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation drafting.&lt;/strong&gt; The engine outputs the number and a structured rationale. A model turns that rationale into a paragraph the buyer can read. The number doesn't move. The prose around it does.&lt;/p&gt;

&lt;p&gt;Notice the pattern. AI sits at the &lt;em&gt;interface&lt;/em&gt; between human language and structured data, in both directions. It does not sit in the calculation. The calculation is where the math has to be defensible.&lt;/p&gt;
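
&lt;p&gt;For the input-parsing spot, the structured draft might look something like this. The field names are invented for illustration; whatever the real schema is, the user sees this form and corrects it before anything reaches the engine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Hypothetical structured draft parsed from the Flutter/Stripe prose above.
# The user confirms or corrects every field before hitting calculate; only
# this structure, never the prose, feeds the deterministic calculation.
parsed_draft = {
    "framework": "flutter",
    "platforms": ["ios", "android"],
    "user_roles": 2,
    "integrations": ["stripe"],
    "features": ["payments", "moderator_dashboard"],
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;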

&lt;h2&gt;
  
  
  The rule, generalized
&lt;/h2&gt;

&lt;p&gt;Here's the rule I've started applying to every product decision involving AI:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI fits at the human-interface layer. AI breaks anywhere the output needs to be defensible later.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the output of a step is going to a human who will look at it and decide what to do — translation, summarization, drafting, suggestion, classification with a human-in-the-loop — AI is often the right tool. Slightly different output each time is fine, because the human is the error-correction layer.&lt;/p&gt;

&lt;p&gt;If the output of a step is going to feed into another deterministic process, or into a contract, or into a regulated decision, or into something a court or an auditor or a finance team will examine in twelve months — AI is the wrong tool. You need a function. Functions are repeatable. Models are not.&lt;/p&gt;

&lt;p&gt;Most products today are picking the wrong layer. They put a model in the calculation path because that's where the magic-feeling happens, then bolt deterministic guardrails around it to clean up the mess. The order is backwards. The deterministic core should be the product. The model should be the helpful assistant standing next to it, translating between humans and the core, never replacing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this looks like as a positioning choice
&lt;/h2&gt;

&lt;p&gt;Saying "no AI in the calculation" out loud, in marketing copy, in 2026 — that's a position. It's a bet that the buyer who actually has to defend an estimate to their CFO values reproducibility more than they value a sparkle icon. So far that bet keeps paying off in customer conversations. The buyers who matter ask one question after the demo: "Will this give me the same answer next month?" When the answer is yes, the conversation gets serious.&lt;/p&gt;

&lt;p&gt;Determinism is the product when accountability is the use case. AI breaks accountability. Build the boring math first. Wrap it in the helpful interface second. Don't confuse the two.&lt;/p&gt;

&lt;p&gt;— Mohamed&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>ai</category>
      <category>typescript</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Routing 30+ image models with one MCP server</title>
      <dc:creator>Mohamed Abdallah</dc:creator>
      <pubDate>Fri, 24 Apr 2026 20:55:32 +0000</pubDate>
      <link>https://forem.com/mohamedabdallah14/routing-30-image-models-with-one-mcp-server-3glo</link>
      <guid>https://forem.com/mohamedabdallah14/routing-30-image-models-with-one-mcp-server-3glo</guid>
      <description>&lt;p&gt;Most image-model wrappers pick one model and call it. DALL-E, Imagen, Stable Diffusion, Flux — pick your favorite, ship an API. The trade-off is fixed: one model's strengths become your whole tool's strengths, and its weaknesses become yours too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/MohamedAbdallah-14/prompt-to-asset" rel="noopener noreferrer"&gt;prompt-to-asset&lt;/a&gt; takes a different angle. It's an MCP server (Model Context Protocol — the open standard Claude and a growing list of clients use for tool integration) that routes a request to the right image model for the task, out of 30+. The routing decisions live in a JSON table; the hard question this post is about is how that table got built.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why routing at all
&lt;/h2&gt;

&lt;p&gt;Image models have wildly different strengths. A short list of specifics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text rendering.&lt;/strong&gt; Imagen and Flux Pro are decent. Stable Diffusion and most Midjourney-clones produce garbled letterforms. If your prompt involves in-image text, routing matters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparent backgrounds.&lt;/strong&gt; Only a subset of models produce clean alpha. The rest force you to matte after generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Style adherence.&lt;/strong&gt; For "flat vector editorial illustration, no 3D," some models comply on first try. Others need 3-4 regenerations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aspect ratios.&lt;/strong&gt; Models have preferred training resolutions. Requesting 2.4:1 when the model was trained at 1:1 produces low-quality output or silent crops.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost.&lt;/strong&gt; Free tiers (Pollinations, Stable Horde, HuggingFace Inference) work for drafts. Paid tiers (Imagen 3, Flux Pro, DALL-E 3) for finals. Picking the wrong tier for the use case wastes either money or quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Given all that, one model per tool is a compromise. Routing lets you respect the compromise instead of pretending it doesn't exist.&lt;/p&gt;

&lt;h2&gt;
  
  
  The routing table
&lt;/h2&gt;

&lt;p&gt;The table lives in &lt;code&gt;data/routing-table.json&lt;/code&gt;. Each entry looks roughly like this (simplified):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"task"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"app_icon"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"aspect_ratio"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1:1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"constraints"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"no_text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"transparent_background_optional"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"preferred_models"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"imagen-3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"paid"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"flux-pro-1.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"paid"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pollinations-flux"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"free"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"never"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dall-e-2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stable-diffusion-1.5"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"post_processing"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"center_crop"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alpha_matte_if_needed"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three things about that structure that took me time to get right:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;"Never" is explicit.&lt;/strong&gt; Routing by best-fit is fine until a user requests "app icon" and the router picks a model that can't handle square aspect. Listing the models to exclude per task prevents the correct-on-average, wrong-on-edge-cases failure mode.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scores are 1-10, not best-only.&lt;/strong&gt; If the top-scored model is down or rate-limited, the router falls through. Fallback paths matter more than I initially thought.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-processing is part of the route.&lt;/strong&gt; "Generate a 1024x1024 icon, then center-crop, then alpha-matte" is a single logical route even though it involves multiple steps. Separating model selection from post-processing prematurely made the table harder to reason about.&lt;/li&gt;
&lt;/ol&gt;
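
&lt;p&gt;Put together, the selection logic those three rules imply is small. A simplified sketch, not the server's actual code, with the availability check left as a stub:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Simplified selection logic implied by a routing-table entry.
# Not the server's implementation; is_available() is a stub for whatever
# health/rate-limit check the real router performs.
def pick_route(entry, is_available):
    never = set(entry.get("never", []))
    candidates = [m for m in entry["preferred_models"] if m["model"] not in never]
    # Highest score first, but fall through when a model is down or
    # rate-limited instead of failing the request.
    for candidate in sorted(candidates, key=lambda m: m["score"], reverse=True):
        if is_available(candidate["model"]):
            return {
                "model": candidate["model"],
                "tier": candidate["tier"],
                # Post-processing travels with the route, so
                # "generate, center-crop, alpha-matte" stays one logical unit.
                "post_processing": entry.get("post_processing", []),
            }
    raise RuntimeError(f"no available model for task {entry['task']!r}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;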

&lt;h2&gt;
  
  
  How the table got built
&lt;/h2&gt;

&lt;p&gt;Honestly? Mostly by running the same ~30 prompts through each of the 30+ models and eyeballing the outputs. There's no automated scoring for "this looks like a good app icon." You see it or you don't.&lt;/p&gt;

&lt;p&gt;I tried LLM-based scoring for a while — generate an image, have GPT-4-vision rate it. It was 60% accurate and introduced its own biases (favored photorealism when the target was flat illustration, etc.). Faster than manual but noisier. I ended up with a hybrid: LLM pre-screening to narrow the field to a top three, then a human pick.&lt;/p&gt;

&lt;p&gt;What I'd do differently if starting over: build the eval harness first, not the router. An eval harness is a script that takes a set of reference prompts and runs them through a model, spitting out a gallery of results. Even without automated scoring, having the gallery on disk for 30 models × 30 prompts = 900 images laid out in a grid makes routing decisions fast. Without the harness, I was regenerating the same images over and over.&lt;/p&gt;
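
&lt;p&gt;A minimal version of that harness, for reference. The &lt;code&gt;generate_image&lt;/code&gt; callable is a placeholder for whichever API client you actually use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal eval harness: every reference prompt through every model, saved
# to a predictable layout so the 900-image gallery exists on disk.
# generate_image(model, prompt) is a placeholder returning PNG bytes.
from pathlib import Path

def run_harness(models, prompts, generate_image, out_dir="eval"):
    for model in models:
        for i, prompt in enumerate(prompts):
            target = Path(out_dir) / model / f"{i:03d}.png"
            if target.exists():  # skip work already done on reruns
                continue
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(generate_image(model, prompt))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;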

&lt;h2&gt;
  
  
  Three execution modes
&lt;/h2&gt;

&lt;p&gt;The server exposes three ways to use it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;inline_svg&lt;/code&gt;&lt;/strong&gt; — the host LLM writes SVG directly. No external model call. For simple logos and wordmarks this is fastest, free, and the LLM often does better than a diffusion model at pure-vector tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;external_prompt_only&lt;/code&gt;&lt;/strong&gt; — the server returns a structured prompt you paste into another tool (e.g., dev.to's cover generator, Midjourney, whatever). For when you want to run the model yourself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;api&lt;/code&gt;&lt;/strong&gt; — the server routes to an actual image-generation API, returns the image. Default mode for most use cases.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Mode 1 surprised me. I built it last, thinking it was niche. It turns out "have Claude write the SVG" is correct for probably 40% of icon/logo requests. The LLM writes cleaner, more editable output than any diffusion model can.&lt;/p&gt;

&lt;h2&gt;
  
  
  Free-tier first
&lt;/h2&gt;

&lt;p&gt;One design decision I'd recommend to anyone building similar tooling: make zero-key usage the first-class path. &lt;code&gt;prompt-to-asset&lt;/code&gt; defaults to Pollinations for drafts, falling through to HuggingFace Inference and Stable Horde if Pollinations is down. No API key required to get a first result.&lt;/p&gt;

&lt;p&gt;The paid-model routes exist for quality runs. But the first-time user who just wants to see what the tool does shouldn't have to get a Google Cloud billing account set up to see anything.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;The routing table grows as the model landscape changes. Every month there's a new Flux version or a new open-source diffusion model worth evaluating. The tool's longevity depends on the table staying current, not on any single routing decision being optimal.&lt;/p&gt;

&lt;p&gt;If you hit a task where the routing picks the wrong model, &lt;a href="https://github.com/MohamedAbdallah-14/prompt-to-asset/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt; with the task description and the output you got. Those reports are how the table stays honest.&lt;/p&gt;




&lt;p&gt;Repo: &lt;a href="https://github.com/MohamedAbdallah-14/prompt-to-asset" rel="noopener noreferrer"&gt;github.com/MohamedAbdallah-14/prompt-to-asset&lt;/a&gt;. MIT licensed. MCP server, installable in Claude Code + compatible clients.&lt;/p&gt;




</description>
      <category>ai</category>
      <category>opensource</category>
      <category>mcp</category>
      <category>typescript</category>
    </item>
    <item>
      <title>The AI writing tic I couldn't stop seeing after building a humanizer</title>
      <dc:creator>Mohamed Abdallah</dc:creator>
      <pubDate>Wed, 22 Apr 2026 05:52:37 +0000</pubDate>
      <link>https://forem.com/mohamedabdallah14/the-ai-writing-tic-i-couldnt-stop-seeing-after-building-a-humanizer-40p</link>
      <guid>https://forem.com/mohamedabdallah14/the-ai-writing-tic-i-couldnt-stop-seeing-after-building-a-humanizer-40p</guid>
      <description>&lt;h2&gt;
  
  
  The AI writing tic I couldn't stop seeing after building a humanizer
&lt;/h2&gt;

&lt;p&gt;I built &lt;a href="https://github.com/MohamedAbdallah-14/unslop" rel="noopener noreferrer"&gt;unslop&lt;/a&gt; to strip the tells that mark text as AI-generated. I thought I knew what they were. I was wrong about which ones hurt most.&lt;/p&gt;

&lt;p&gt;The one I keep catching: the tricolon.&lt;/p&gt;

&lt;p&gt;"X, Y, and Z" where all three are the same abstraction level. Every AI output does this. Not because the writer means three things. Because the model learned that three parallel items feel rhetorically complete.&lt;/p&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Modern software engineering requires discipline, patience, and a deep understanding of systems at scale.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Modern software engineering requires discipline. The rest comes with time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The second version has one claim and a closer. The first has three, because the model is filling the shape. Discipline, patience, and understanding of systems are not three separate things — patience is part of discipline and understanding systems is what the discipline is &lt;em&gt;for&lt;/em&gt;. The tricolon hides that by flattening everything to a list.&lt;/p&gt;

&lt;p&gt;Once I saw it, I couldn't unsee it. Every cover letter I've read from a junior dev in the last year opens with a tricolon. Every LinkedIn post from a thought-leader account has one in the third paragraph. Every "humanizer" tool I tested left it in, because it's grammatically correct and individually the words are fine.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;unslop&lt;/code&gt; kills tricolons at the &lt;code&gt;balanced&lt;/code&gt; and &lt;code&gt;full&lt;/code&gt; intensity levels. It replaces them with the single claim or pair the writer probably meant. At the &lt;code&gt;full&lt;/code&gt; level, about a third of tricolons get replaced with a period and a fragment, because that's what the claim usually needed.&lt;/p&gt;
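
&lt;p&gt;If you want to spot them in a draft without the tool, the crude version is a regex that flags "X, Y, and Z" candidates for a human to look at. This is not unslop's rule, which weighs abstraction level and context; it just surfaces suspects:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Crude tricolon spotter. Not unslop's actual rule; it only flags
# "X, Y, and Z" shaped spans for a human to reconsider.
import re

TRICOLON = re.compile(r"\b[\w\- ]+, [\w\- ]+,? (?:and|or) [\w\- ]+", re.IGNORECASE)

def flag_tricolons(text):
    return [match.group(0).strip() for match in TRICOLON.finditer(text)]

# flag_tricolons("Modern software engineering requires discipline, patience, "
#                "and a deep understanding of systems at scale.")
# flags the sentence as one candidate span (everything up to the period).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;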

&lt;p&gt;If you write with AI assistance: read your output for tricolons. You don't need a tool. You just need to notice that three things, listed, usually mean one thing or two, padded.&lt;/p&gt;

&lt;p&gt;If you're curious about what else shows up: the tool is MIT-licensed, installs as a Claude Code plugin, and lists the full pattern catalog in its README.&lt;/p&gt;

&lt;p&gt;—&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/MohamedAbdallah-14/unslop" rel="noopener noreferrer"&gt;unslop on GitHub&lt;/a&gt; &lt;/p&gt;

</description>
      <category>ai</category>
      <category>writing</category>
      <category>llm</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
