<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: lofder.issac</title>
    <description>The latest articles on Forem by lofder.issac (@_95a3e57463e6442feacd0).</description>
    <link>https://forem.com/_95a3e57463e6442feacd0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3839394%2Ffbf94301-35df-456d-9753-2b76036733cc.jpg</url>
      <title>Forem: lofder.issac</title>
      <link>https://forem.com/_95a3e57463e6442feacd0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/_95a3e57463e6442feacd0"/>
    <language>en</language>
    <item>
      <title>your supplier just raised prices. now what?</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Thu, 09 Apr 2026 19:45:48 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/your-supplier-just-raised-prices-now-what-2eig</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/your-supplier-just-raised-prices-now-what-2eig</guid>
      <description>&lt;p&gt;Last Tuesday one of my AliExpress suppliers bumped the price on an LED therapy mask from $18 to $26. Overnight. No warning. The product was already live on two Shopify stores with a $32 sell price — margin went from comfortable to barely breaking even.&lt;/p&gt;

&lt;p&gt;If you run a dropshipping operation, this isn't news. Suppliers raise prices, go out of stock, or vanish. The standard fix is manual: open DSers, find the product, search for alternatives, compare SKU variants one by one, update the mapping. For 40+ products across multiple stores, that's a full afternoon gone. Every month.&lt;/p&gt;

&lt;p&gt;So I built an MCP tool that does it automatically — supplier search, product scoring, variant matching, mapping update. Here's what went into it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; — &lt;code&gt;dsers_replace_mapping&lt;/code&gt; is a new MCP tool (v1.5.0 of &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;@lofder/dsers-mcp-product&lt;/a&gt;) that takes a live DSers product, searches AliExpress for cheaper suppliers, scores candidates across 5 dimensions, matches SKU variants with a three-tier algorithm (exact → context → fuzzy), and optionally writes the new mapping. Auto-apply only fires when every variant match exceeds strict confidence thresholds. Works with Claude, Cursor, Windsurf, or any MCP-compatible AI agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why automated supplier replacement is hard
&lt;/h2&gt;

&lt;p&gt;Finding a cheaper supplier is the easy part. AliExpress has thousands of sellers for any given product. The hard part is making sure it's actually the &lt;em&gt;same&lt;/em&gt; product with &lt;em&gt;compatible&lt;/em&gt; SKU variants.&lt;/p&gt;

&lt;p&gt;Take that LED mask. My current listing has two variants: "EU PLUG (220-240V)" and "US PLUG (100-110V)". A replacement supplier might list the same options as "European Standard" and "American Standard". Or "220V" and "110V". Or they might bundle the plug type with a "Ships From" option, creating a 2×3 variant matrix where I had 2×1.&lt;/p&gt;

&lt;p&gt;If you get this wrong, a customer orders the EU plug version and receives a UK plug. Chargeback, bad review, account risk.&lt;/p&gt;

&lt;p&gt;This is a supply chain matching problem, and it's messier than it looks.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the product scoring pipeline works
&lt;/h2&gt;

&lt;p&gt;I ended up building a multi-stage pipeline. First, normalize everything — strip the marketing noise from titles ("[2026 Trending] HOT SALE" becomes "led therapy mask with neck"), parse structured specs out of option values (extracting &lt;code&gt;plug=eu plug&lt;/code&gt; and &lt;code&gt;voltage=220v&lt;/code&gt; from "EU PLUG (220-240V)"), and ignore shipping axes entirely since "Ships From: China" tells you nothing about the product itself.&lt;/p&gt;
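&lt;p&gt;As a rough illustration of that normalization step (the regexes and function names here are mine, not the ones shipped in the package; a minimal sketch, not the production code):&lt;/p&gt;

```python
import re

# marketing phrases and bracketed tags to strip from titles
MARKETING_NOISE = re.compile(
    r"\[[^\]]*\]|hot sale|free shipping|2026|trending", re.IGNORECASE
)

def clean_title(title):
    # strip marketing noise, lowercase, collapse whitespace
    cleaned = MARKETING_NOISE.sub(" ", title)
    return " ".join(cleaned.lower().split())

def parse_option_value(value):
    # pull structured specs out of a raw option string,
    # e.g. "EU PLUG (220-240V)" yields plug and voltage keys
    specs = {}
    plug = re.search(r"\b(eu|us|uk|au)\s*plug\b", value, re.IGNORECASE)
    if plug:
        specs["plug"] = plug.group(1).lower() + " plug"
    voltage = re.search(r"(\d{2,3})\s*-?\s*\d*\s*v\b", value, re.IGNORECASE)
    if voltage:
        specs["voltage"] = voltage.group(1) + "v"
    return specs
```

&lt;p&gt;The real pipeline handles more spec axes (quantity, capacity, color), but the shape is the same: strip noise, then extract comparable key=value pairs that can be matched across suppliers.&lt;/p&gt;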

&lt;p&gt;Then score candidates across five signals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Structured spec overlap&lt;/strong&gt; (35% weight) — quantity, capacity, plug type, voltage. These are the hard constraints. If the candidate has "UK Plug" and you need "EU Plug", it's a blocker regardless of everything else.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Option value coverage&lt;/strong&gt; (25%) — what percentage of your current variant option pairs exist in the candidate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Title token similarity&lt;/strong&gt; (20%) — Jaccard similarity on cleaned title tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image similarity&lt;/strong&gt; (10%) — basename comparison on product image URLs. Crude, but surprisingly effective on AliExpress where the same factory photo gets reused.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market signals&lt;/strong&gt; (10%) — stock availability, rating, order volume. A supplier with zero stock or 2-star rating isn't useful even if the product matches perfectly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full scoring function runs about 30 lines; the weighted sum at its core:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;productScore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;roundScore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;titleSimilarityScore&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="nx"&gt;structuredSpecScore&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.35&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="nx"&gt;optionCoverageScore&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.25&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="nx"&gt;imageSimilarityScore&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="nx"&gt;marketSignalScore&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Three-tier SKU variant matching
&lt;/h2&gt;

&lt;p&gt;Product-level scoring gets you a ranked list of candidates. But you still need to map &lt;em&gt;each variant&lt;/em&gt; from your current supplier to the new one. This is where most automation attempts fall apart.&lt;/p&gt;

&lt;p&gt;I went with three tiers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Exact signature match&lt;/strong&gt; — both sides normalize to &lt;code&gt;plug=eu plug&lt;/code&gt; → instant 1:1 mapping, confidence = 1.0.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context match&lt;/strong&gt; — use the ignored axes (like shipping origin) as tie-breakers when two variants share the same product signature but differ on logistics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fuzzy match&lt;/strong&gt; — for everything else, score option pairs (45%), context option pairs (25%), title tokens (15%), image similarity (10%), and price neighborhood (5%). Only accept matches above 0.6, and require a meaningful gap between the best and second-best match to avoid ambiguous pairings.&lt;/li&gt;
&lt;/ol&gt;
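&lt;p&gt;The fuzzy tier can be sketched with the weights from the list above (helper names are illustrative, and the exact gap value is an assumption; the post only pins the 0.6 floor at this stage):&lt;/p&gt;

```python
# weights from the fuzzy-match tier described above
FUZZY_WEIGHTS = {
    "option_pairs": 0.45,
    "context_pairs": 0.25,
    "title_tokens": 0.15,
    "image": 0.10,
    "price": 0.05,
}
MIN_SCORE = 0.6  # floor stated in the post
MIN_GAP = 0.1    # assumed "meaningful gap" over the runner-up

def fuzzy_score(signals):
    # weighted sum of the five signals, each already in [0, 1]
    return round(sum(FUZZY_WEIGHTS[k] * signals.get(k, 0.0) for k in FUZZY_WEIGHTS), 4)

def pick_match(candidates):
    # candidates: list of (variant_id, signals) pairs for one source variant;
    # reject weak or ambiguous matches rather than guess
    scored = sorted(((fuzzy_score(s), vid) for vid, s in candidates), reverse=True)
    if not scored:
        return None
    best_score, best_id = scored[0]
    runner_up = scored[1][0] if len(scored) > 1 else 0.0
    if best_score >= MIN_SCORE and best_score - runner_up >= MIN_GAP:
        return best_id
    return None
```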

&lt;p&gt;Auto-apply only triggers when &lt;em&gt;every single variant&lt;/em&gt; maps with high confidence. One ambiguous match and the whole thing drops to "analysis only" — the tool shows you what it found, but won't touch your live mapping without human confirmation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running it as an MCP tool with AI agents
&lt;/h2&gt;

&lt;p&gt;This whole pipeline became the 13th tool in my open-source MCP server for DSers dropshipping. The tool is called &lt;code&gt;dsers_replace_mapping&lt;/code&gt;. You give it a product ID and store ID, it pulls the current mapping, searches for alternatives, scores them, matches variants, and returns a structured report.&lt;/p&gt;

&lt;p&gt;An AI agent can use it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Check product dp-8291 in store st-102.
If the supplier price went up more than 20%,
find a cheaper alternative and update the mapping.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent calls &lt;code&gt;dsers_my_products&lt;/code&gt; to get the current price, calls &lt;code&gt;dsers_replace_mapping&lt;/code&gt; with &lt;code&gt;auto_apply=false&lt;/code&gt; first to see the analysis, evaluates whether the top candidate is acceptable, and only then calls it again with &lt;code&gt;auto_apply=true&lt;/code&gt; if everything checks out.&lt;/p&gt;

&lt;p&gt;The response includes confidence scores, blocked reasons, variant match details, and a full mapping preview — enough for the agent (or a human) to make an informed decision.&lt;/p&gt;
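&lt;p&gt;For a sense of what that report looks like, here is a purely illustrative shape (field names and values are invented for this post; only the four categories listed above are what the tool actually returns):&lt;/p&gt;

```json
{
  "product_score": 0.91,
  "auto_applied": false,
  "blocked_reasons": ["one variant match below confidence threshold"],
  "variant_matches": [
    {"from": "EU PLUG (220-240V)", "to": "European Standard", "score": 0.97},
    {"from": "US PLUG (100-110V)", "to": "American Standard", "score": 0.58}
  ],
  "mapping_preview": {"supplier_id": "ae-123456", "variant_count": 2}
}
```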

&lt;h2&gt;
  
  
  Normalizing DSers API responses
&lt;/h2&gt;

&lt;p&gt;Building the scoring was the fun part. The painful part was normalizing the wild variety of API response shapes from DSers.&lt;/p&gt;

&lt;p&gt;The same product detail can arrive with variants under &lt;code&gt;variants&lt;/code&gt;, &lt;code&gt;skuList&lt;/code&gt;, &lt;code&gt;variantList&lt;/code&gt;, or &lt;code&gt;productSkuList&lt;/code&gt;. Prices might be in cents or dollars — sometimes in the same response. Option values sometimes live in &lt;code&gt;optionsSnapshot&lt;/code&gt;, sometimes in &lt;code&gt;options&lt;/code&gt;, sometimes both with different structures. I wrote a &lt;code&gt;pickBestProductNode&lt;/code&gt; function that recursively scores every nested object in the API response and picks the one that looks most like a product.&lt;/p&gt;

&lt;p&gt;Not elegant. But it works on every response shape I've thrown at it so far.&lt;/p&gt;
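&lt;p&gt;A sketch of the idea behind &lt;code&gt;pickBestProductNode&lt;/code&gt;, translated to Python (the key names come from the paragraph above; the scoring heuristic itself is simplified for illustration):&lt;/p&gt;

```python
# keys that suggest a nested object is the product payload
PRODUCT_KEYS = {
    "variants", "skuList", "variantList", "productSkuList",
    "title", "options", "optionsSnapshot",
}

def product_likeness(node):
    # count how many product-ish keys the dict carries
    if not isinstance(node, dict):
        return 0
    return sum(1 for k in node if k in PRODUCT_KEYS)

def pick_best_product_node(payload):
    # walk every nested dict/list and keep the highest-scoring node
    best, best_score = None, 0
    stack = [payload]
    while stack:
        node = stack.pop()
        if isinstance(node, dict):
            score = product_likeness(node)
            if score > best_score:
                best, best_score = node, score
            stack.extend(node.values())
        elif isinstance(node, list):
            stack.extend(node)
    return best
```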

&lt;h2&gt;
  
  
  Scoring thresholds and test coverage
&lt;/h2&gt;

&lt;p&gt;The feature adds about 1,500 lines of logic across three modules, plus 950 lines of tests covering normalization, scoring, variant matching, and the full flow with mocked API calls. Build passes, 312 tests green.&lt;/p&gt;

&lt;p&gt;For the scoring thresholds: auto-apply requires &lt;code&gt;product_score &amp;gt;= 0.88&lt;/code&gt; at the product level, plus &lt;code&gt;score &amp;gt;= 0.93&lt;/code&gt; and &lt;code&gt;score_gap &amp;gt;= 0.1&lt;/code&gt; at the variant level. These numbers came from testing against ~30 real product pairs. They're conservative on purpose — false positives on supplier replacement cost real money.&lt;/p&gt;
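&lt;p&gt;Put together, the auto-apply gate is a strict conjunction over those thresholds; a minimal sketch (field names are illustrative):&lt;/p&gt;

```python
PRODUCT_MIN = 0.88  # product-level threshold from the post
VARIANT_MIN = 0.93  # per-variant score threshold
GAP_MIN = 0.1       # required lead over the runner-up match

def can_auto_apply(product_score, variant_matches):
    # variant_matches: list of dicts with "score" and "score_gap" keys;
    # a single weak or ambiguous variant blocks auto-apply entirely
    if not variant_matches:
        return False
    product_ok = product_score >= PRODUCT_MIN
    variants_ok = all(
        m["score"] >= VARIANT_MIN and m["score_gap"] >= GAP_MIN
        for m in variant_matches
    )
    return product_ok and variants_ok
```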




&lt;p&gt;This ships as v1.5.0 of &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;@lofder/dsers-mcp-product&lt;/a&gt;. It's the first MCP server I know of that does automated supplier replacement with variant-level matching. If you're running a dropshipping operation and tired of the supplier treadmill, the tool might save you some afternoons.&lt;/p&gt;

&lt;p&gt;If you've tried automating supplier matching in your own stack — what signals did you end up relying on? I'm especially curious whether anyone's gotten CLIP-based image matching to work reliably for e-commerce product comparison.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>dropshipping</category>
      <category>shopify</category>
    </item>
    <item>
      <title>pull requests aren't dying. your review process is.</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Thu, 09 Apr 2026 14:09:47 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/pull-requests-arent-dying-your-review-process-is-k37</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/pull-requests-arent-dying-your-review-process-is-k37</guid>
      <description>&lt;p&gt;There's a take going around that pull requests are dying. Cognition, OpenAI, half of AI Twitter — everyone seems to agree that PRs can't survive the age of coding agents.&lt;/p&gt;

&lt;p&gt;I've been reading the research they're citing. The data is real. The conclusion isn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bottleneck is real
&lt;/h2&gt;

&lt;p&gt;Faros AI analyzed 1,255 engineering teams and 10,000+ developers. The numbers are hard to argue with: teams using AI coding tools merged 98% more PRs, but code review time went up 91%. PR size increased 154%. Bug rate went up 9%.&lt;/p&gt;

&lt;p&gt;So yes — if your team adopted Copilot or Cursor or Codex and suddenly every engineer is producing twice the output, your reviewers are drowning. That part is true.&lt;/p&gt;

&lt;p&gt;But "reviewers are drowning" and "PRs are dying" are very different statements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkisyg0bsjor3p1u0753d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkisyg0bsjor3p1u0753d.png" alt="AI output vs human review — the real bottleneck" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's actually broken
&lt;/h2&gt;

&lt;p&gt;The problem isn't the pull request as a concept. It's that most teams never built their review process to handle variable throughput.&lt;/p&gt;

&lt;p&gt;Think about it. Before AI tools, your team probably had an unspoken rhythm: maybe 2-3 PRs per developer per day, each one a few hundred lines. Reviewers could keep up because the input rate was roughly constant. Nobody designed it that way; it just settled into a natural pace.&lt;/p&gt;

&lt;p&gt;AI broke that pace. Now one developer can produce 5-8 PRs in a day, some of them 500+ lines. The review process didn't change, so it buckled.&lt;/p&gt;

&lt;p&gt;But that's a process problem, not a format problem. You don't throw away email because you get too much of it. You build filters, labels, priorities. The same applies here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three things the "PR is dead" crowd gets wrong
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. They assume every team has this problem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Faros studied high-AI-adoption teams — the ones pushing the hardest on agent-driven development. Most engineering teams in 2026 are not there yet. They're using autocomplete and chat, not autonomous coding agents producing hundreds of lines unsupervised. For them, PRs work fine. The bottleneck doesn't exist.&lt;/p&gt;

&lt;p&gt;The "PR is dead" narrative takes a frontier problem and projects it onto the entire industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. They ignore compliance.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In fintech, healthcare, defense, and any industry with SOC 2 / ISO 27001 / HIPAA requirements, human code review isn't optional. It's a regulatory checkbox. You can make it faster, you can augment it with AI, but you can't eliminate it. An AI agent approving another AI agent's code doesn't satisfy an auditor.&lt;/p&gt;

&lt;p&gt;Nobody in the "kill the PR" camp mentions this. Probably because most of them work at AI startups where compliance means adding a checkbox to the settings page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. They understate the cost of alternatives.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Devin is $500/month per seat. Running a Managed Agent is &lt;code&gt;$0.08/hr&lt;/code&gt; plus token costs. OpenAI's Symphony requires Elixir infrastructure and a team that understands agent orchestration. Specification-driven development with BDD requires rewriting your entire planning process.&lt;/p&gt;

&lt;p&gt;These aren't free upgrades. For a 5-person startup or a 20-person agency, "just adopt Devin Review and Symphony" is not a realistic answer to "my PRs take too long to review."&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually works
&lt;/h2&gt;

&lt;p&gt;I've been running an open-source MCP server that handles dropshipping automation — product sourcing, imports, supplier replacement, pushing to Shopify. It generates structured, deterministic output from well-defined tool calls. When my AI agent calls &lt;code&gt;dsers_replace_mapping&lt;/code&gt;, the result is a JSON payload with confidence scores, variant matches, and blocking reasons. A reviewer can scan that in 30 seconds.&lt;/p&gt;

&lt;p&gt;Compare that to a coding agent that rewrites 400 lines of business logic. The review burden is completely different.&lt;/p&gt;

&lt;p&gt;The pattern I keep seeing in teams that don't have a review bottleneck:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Smaller PRs.&lt;/strong&gt; Not because of policy — because the tools produce focused, scoped changes. MCP tools, well-structured agent tasks, single-responsibility operations. If your agent is producing 500-line PRs, your task decomposition is the problem, not your review process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Machine-checkable constraints.&lt;/strong&gt; Architecture rules in linters, type systems, CI checks that catch structural issues before a human ever sees the diff. Can Boluk showed that just changing the editing format improved 15 LLMs by 5-14 percentage points. The right harness reduces review load more than any review tool.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Human review for intent, not syntax.&lt;/strong&gt; The OpenAI Harness Engineering experiment got this right — engineers review the plan, the spec, the acceptance criteria. The code is validated by machines. But this doesn't mean PRs go away. It means the PR contains a spec + test results + a diff, and the human reviews the first two.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
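&lt;p&gt;To make the second point concrete, here is one way to encode an architecture rule as a CI check. A minimal sketch with made-up module names, using Python's &lt;code&gt;ast&lt;/code&gt; module:&lt;/p&gt;

```python
import ast
import pathlib

# hypothetical rule: handler modules may not import the db layer directly
FORBIDDEN = {"app.handlers": {"app.db"}}

def imports_of(path):
    # yield every module name imported by the file at `path`
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                yield alias.name
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module

def violations(root):
    # return (module, forbidden_import) pairs found under `root`
    found = []
    for path in pathlib.Path(root).rglob("*.py"):
        module = ".".join(path.with_suffix("").parts)
        for pkg, banned in FORBIDDEN.items():
            if module.startswith(pkg):
                for imp in imports_of(path):
                    if any(imp.startswith(b) for b in banned):
                        found.append((module, imp))
    return found
```

&lt;p&gt;Wire &lt;code&gt;violations(...)&lt;/code&gt; into CI and fail the build on a non-empty result. Structural problems like this never reach a human reviewer.&lt;/p&gt;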

&lt;h2&gt;
  
  
  The actual future
&lt;/h2&gt;

&lt;p&gt;PRs will evolve. They'll carry richer metadata — agent confidence scores, test coverage deltas, architectural impact summaries. The diff will become one section of a larger review artifact, not the whole thing.&lt;/p&gt;

&lt;p&gt;Some teams will move to trunk-based development with post-merge verification. Some will adopt tiered review where low-risk changes get auto-merged with monitoring. Some will keep traditional PR review because their domain requires it.&lt;/p&gt;

&lt;p&gt;What won't happen is a universal replacement. The "after the pull request" framing assumes there's a single successor. There isn't. There's a spectrum of review strategies, and teams will pick based on their risk tolerance, compliance requirements, and team size.&lt;/p&gt;

&lt;p&gt;The Kevlin Henney quote keeps getting thrown around — "describing a program in unambiguous detail is programming." It's true, and it's exactly why specification-driven development won't replace code review either. Writing a perfect spec is as hard as writing perfect code. We'll need review at every level of abstraction, just with different tools.&lt;/p&gt;




&lt;p&gt;If your review process is drowning, fix your review process. Break PRs into smaller pieces. Add machine checks. Move intent review upstream. But don't throw away the mechanism — it's one of the few things in software engineering that actually works at scale across every team size and industry.&lt;/p&gt;

&lt;p&gt;Code review survived waterfall, agile, and microservices. Pull requests will survive AI agents too. They just need better tooling around them, not a funeral.&lt;/p&gt;

&lt;p&gt;What's your team's experience — are you actually hitting the review bottleneck, or is this still a theoretical problem for you?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Anthropic just mass-produced the agent. who's building the hands?</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:30:34 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/anthropic-just-mass-produced-the-agent-whos-building-the-hands-1mfi</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/anthropic-just-mass-produced-the-agent-whos-building-the-hands-1mfi</guid>
      <description>&lt;p&gt;Yesterday Anthropic shipped &lt;a href="https://platform.claude.com/docs/en/managed-agents/overview" rel="noopener noreferrer"&gt;Claude Managed Agents&lt;/a&gt;. If you haven't seen it yet — they're now hosting the full agent runtime for you. Session persistence, sandboxed execution, error recovery, tool orchestration. You define what the agent should do, they handle everything underneath.&lt;/p&gt;

&lt;p&gt;It's a big deal. But I think most people are looking at the wrong part.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everyone's talking about the brain. Nobody's talking about the hands.
&lt;/h2&gt;

&lt;p&gt;Managed Agents gives you a brain that can reason, plan, and recover from failures across long-running sessions. It gives you bash, file I/O, web search. The infrastructure layer is genuinely impressive — &lt;code&gt;$0.08/hr&lt;/code&gt; per agent, auto-recovery on disconnect, built-in prompt caching.&lt;/p&gt;

&lt;p&gt;But here's what it can't do out of the box: anything domain-specific.&lt;/p&gt;

&lt;p&gt;It can't check if your AliExpress supplier went offline. It can't search for a replacement. It can't compare SKU variant mappings between two suppliers who call the same color "Navy Blue" and "深蓝色" ("deep blue" in Chinese) respectively. It can't push a product to your Shopify store.&lt;/p&gt;

&lt;p&gt;The brain is there. The hands are missing.&lt;/p&gt;

&lt;p&gt;That's where MCP servers come in. Managed Agents natively supports MCP — you declare your servers in the agent config, and the agent discovers and calls your tools like it would call bash or web search. No extra plumbing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually plugged in
&lt;/h2&gt;

&lt;p&gt;I've been building an open-source MCP server for dropshipping automation — it connects to &lt;a href="https://www.dsers.com/" rel="noopener noreferrer"&gt;DSers&lt;/a&gt; and handles everything from product sourcing to store push. Last week I added a supplier replacement tool that searches for alternatives, scores candidates, matches SKU variants, and optionally swaps the mapping. The kind of supply chain operation that used to take a seller 2-3 hours of manual work per product.&lt;/p&gt;

&lt;p&gt;Connecting it to Managed Agents took about 20 lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;claude_agent_sdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ClaudeAgentOptions&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Check product dp-8291 in store st-102. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;If the current supplier&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s price increased more than 20%, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;find a cheaper alternative and update the mapping.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ClaudeAgentOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;mcp_servers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dsers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;command&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;npx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;args&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@lofder/dsers-mcp-product&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. The agent picks up the MCP tools, figures out the right call sequence — fetch current mapping, search suppliers, score candidates, match variants, apply if confidence is high enough — and runs it autonomously. If the session disconnects, Managed Agents picks up where it left off.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters for supply chain automation
&lt;/h2&gt;

&lt;p&gt;Dropshipping supply chains break constantly. Suppliers vanish, prices spike overnight, stock runs dry on your best-selling variants. Traditionally you find out when a customer complains. Or if you're diligent, you check manually every few days.&lt;/p&gt;

&lt;p&gt;With a Managed Agent running 24/7, you can set up something like: "every 6 hours, scan my top 50 products, flag any supplier that's down or 15%+ more expensive, and auto-replace if a high-confidence alternative exists." An external trigger starts each run (more on that below), the agent handles the reasoning, and the MCP server handles the domain logic. You pay &lt;code&gt;$0.08/hr&lt;/code&gt; for the runtime and whatever tokens the model consumes.&lt;/p&gt;

&lt;p&gt;Honestly, the token cost is the part I'm still watching. My supplier replacement tool does a lot of API calls — search, detail fetch, scoring, variant matching — and each of those round-trips adds context. For a single product check it's fine. For 50 products on repeat, I haven't done the math yet. That's going to depend heavily on how well Managed Agents handles context compaction between iterations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The interesting tradeoff nobody's mentioning
&lt;/h2&gt;

&lt;p&gt;Managed Agents is opinionated about architecture. Anthropic hosts the brain, the sandbox, the session. You bring the tools. This is great if you want to move fast — you don't build an agent loop, you don't manage containers, you don't deal with state persistence.&lt;/p&gt;

&lt;p&gt;But it also means your agent only runs on Claude. You can't swap the model. You can't self-host. You're renting the runtime from Anthropic and paying per-hour plus per-token.&lt;/p&gt;

&lt;p&gt;For my use case that's acceptable — dropshipping sellers already pay for Shopify, DSers, ad platforms. Adding &lt;code&gt;$0.08/hr&lt;/code&gt; for an agent that monitors their supply chain is cheap compared to losing a best-seller for 3 days because the supplier silently went 404.&lt;/p&gt;

&lt;p&gt;But if you're building something where model portability matters, or where you need to run in your own VPC for compliance, Managed Agents isn't the answer. The MCP server still works everywhere else — Cursor, Claude Desktop, any MCP-compatible client. The Managed Agents integration is just one more deployment target.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's still missing
&lt;/h2&gt;

&lt;p&gt;A few things I ran into during setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No cron/schedule primitive.&lt;/strong&gt; You can't tell a Managed Agent "run this every 6 hours." You'd need an external scheduler (Lambda, cron job, whatever) that creates sessions on a timer. Seems like an obvious addition they'll probably ship soon.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP auth in the sandbox.&lt;/strong&gt; My MCP server uses OAuth — the user runs &lt;code&gt;npx @lofder/dsers-mcp-product login&lt;/code&gt; once locally and the token persists. In the Managed Agents sandbox, you'd need to inject the token via environment variables or mount a credentials file. Workable, but not as smooth as the local experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability.&lt;/strong&gt; I can see events in the Claude Console, but I want structured logs — "agent checked 50 products, replaced 3 suppliers, skipped 2 due to low confidence." Right now I'd have to parse that from the event stream myself.&lt;/li&gt;
&lt;/ul&gt;
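&lt;p&gt;Until a schedule primitive exists, the external trigger can be trivially small. A sketch of the interval loop (generic on purpose; in production you'd likely use cron or a Lambda schedule instead of a long-lived process):&lt;/p&gt;

```python
import asyncio

async def every(interval_seconds, task, max_runs=None):
    # call `task` repeatedly, sleeping between runs;
    # max_runs keeps the loop finite for testing, None means run forever
    runs = 0
    while True:
        await task()
        runs += 1
        if max_runs is not None and runs >= max_runs:
            break
        await asyncio.sleep(interval_seconds)

async def scan_supply_chain():
    # in practice: wrap the claude_agent_sdk `query(...)` call from the
    # earlier snippet, asking the agent to scan products for weak suppliers
    print("scan tick")
```

&lt;p&gt;Start it with &lt;code&gt;asyncio.run(every(6 * 60 * 60, scan_supply_chain))&lt;/code&gt; from a container or systemd unit, and each tick becomes one Managed Agents session.&lt;/p&gt;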

&lt;p&gt;None of these are blockers. They're the kind of rough edges you expect from a beta that launched literally yesterday.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger picture
&lt;/h2&gt;

&lt;p&gt;MCP servers have been around for a while, but they've mostly lived in developer tools — Cursor, Claude Desktop, VS Code. Managed Agents changes the distribution model. Now an MCP server isn't just a dev tool plugin. It's a building block for autonomous business processes.&lt;/p&gt;

&lt;p&gt;A supply chain monitoring agent. An inventory rebalancing agent. A competitive pricing agent. These are all just "a brain + the right MCP tools + a schedule." Managed Agents handles the first part. The tools are what make it useful.&lt;/p&gt;

&lt;p&gt;If you're building MCP servers, I think this is worth paying attention to. The surface area for your tools just got a lot bigger.&lt;/p&gt;




&lt;p&gt;The MCP server is open source: &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;github.com/lofder/dsers-mcp-product&lt;/a&gt;. The supplier replacement tool (&lt;code&gt;dsers_replace_mapping&lt;/code&gt;) ships in the next release. If you're running dropshipping on DSers, it works today with Cursor and Claude Desktop, and now with Managed Agents too.&lt;/p&gt;

&lt;p&gt;Curious if anyone else has tried plugging production MCP servers into Managed Agents — what was your experience? I'm especially interested in how token costs scale for long-running sessions.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>dropshipping</category>
      <category>automation</category>
    </item>
    <item>
      <title>I Killed My OpenClaw — Built the Memory, the Gateway, the Patches. Then the Token Bill Arrived.</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Wed, 08 Apr 2026 16:18:57 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/i-killed-my-openclaw-built-the-memory-the-gateway-the-patches-then-the-token-bill-arrived-2585</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/i-killed-my-openclaw-built-the-memory-the-gateway-the-patches-then-the-token-bill-arrived-2585</guid>
      <description>&lt;h2&gt;
  
  
  What I Actually Built
&lt;/h2&gt;

&lt;p&gt;Between March and April 2026, I shipped 3 projects around the OpenClaw ecosystem. Not forks. Original work.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Engram — Scope-Aware Memory for Multi-Agent AI
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/lofder/Engram" rel="noopener noreferrer"&gt;lofder/Engram&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem with OpenClaw's memory was simple: it's file-driven. You write a &lt;code&gt;SOUL.md&lt;/code&gt;, you manually curate skills as markdown files, and the AI loads everything into context every single time. More memories = more tokens = more money. And nothing gets cleaned up automatically.&lt;/p&gt;

&lt;p&gt;I built Engram as the fix. It's a full memory architecture powered by Mem0 + Qdrant + MCP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scoped memory&lt;/strong&gt; — &lt;code&gt;global&lt;/code&gt;, &lt;code&gt;group:project-x&lt;/code&gt;, &lt;code&gt;dm&lt;/code&gt;, &lt;code&gt;agent:coder&lt;/code&gt;. Different memory pools for different contexts. Your coding preferences don't pollute your email agent's memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;7 memory types&lt;/strong&gt; — &lt;code&gt;preference&lt;/code&gt;, &lt;code&gt;fact&lt;/code&gt;, &lt;code&gt;procedure&lt;/code&gt;, &lt;code&gt;lesson&lt;/code&gt;, &lt;code&gt;decision&lt;/code&gt;, &lt;code&gt;task_log&lt;/code&gt;, &lt;code&gt;knowledge&lt;/code&gt;. Not just "remember this" — structured categories that the retrieval system can filter on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-cleaning&lt;/strong&gt; — duplicates merge automatically. Old task logs get summarized into compact knowledge. Stale entries fade. The AI manages its own memory instead of you maintaining files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forgetting logic&lt;/strong&gt; — this is the part nobody else does. Remembering is easy. Knowing what to &lt;em&gt;forget&lt;/em&gt; is the hard problem. Engram tracks memory age, access frequency, and relevance decay. A task log from 3 weeks ago that was never recalled again? It gets compressed into a one-line summary, then eventually dropped. A preference you stated on day 1 that gets recalled every session? It stays forever. The AI doesn't just accumulate — it curates. Without this, every memory system eventually drowns in its own context, and you're back to paying for 12K tokens of stale history on every request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust scoring&lt;/strong&gt; — &lt;code&gt;high&lt;/code&gt; / &lt;code&gt;medium&lt;/code&gt; / &lt;code&gt;low&lt;/code&gt;. User-stated preferences are high trust. AI inferences are low trust. When memories conflict, trust breaks the tie.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5 MCP tools&lt;/strong&gt; — &lt;code&gt;mem0_add&lt;/code&gt;, &lt;code&gt;mem0_recall&lt;/code&gt;, &lt;code&gt;mem0_search&lt;/code&gt;, &lt;code&gt;mem0_delete&lt;/code&gt;, &lt;code&gt;mem0_compact&lt;/code&gt;. Any MCP-compatible client can call them.&lt;/li&gt;
&lt;/ul&gt;
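&lt;p&gt;To make the forgetting logic concrete, here's a toy version of the decay scoring. This is not Engram's actual formula; the half-lives and thresholds are invented for illustration:&lt;/p&gt;

```python
import math
import time

def retention_score(created_at, last_recall, recall_count, now=None):
    # Toy version of the decay idea, not Engram's actual formula: frequent,
    # recently recalled memories score high; old, never-recalled ones decay to 0.
    now = now or time.time()
    age_days = (now - created_at) / 86400
    recency_days = (now - last_recall) / 86400
    return recall_count * math.exp(-recency_days / 14) / (1 + age_days / 30)

def lifecycle_action(score):
    # Thresholds are illustrative: keep hot memories, compress lukewarm, drop cold.
    if score > 1.0:
        return "keep"
    if score > 0.2:
        return "compress"
    return "drop"
```

&lt;p&gt;The real system also weighs trust and scope, but the core idea is the same: score every memory continuously, and let the score drive keep / compress / drop.&lt;/p&gt;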

&lt;p&gt;The design philosophy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OpenClaw:  You write rules.md → AI reads it → you update rules.md → repeat forever
Engram:    AI remembers on its own → compresses over time → you never maintain a file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When Hermes Agent launched with "self-improving procedural memory" as its headline feature, I had a moment. Because Engram already did this — and more. Hermes stores skills as markdown files and uses LLM summarization for compression. It remembers, but it doesn't forget. There's no decay, no lifecycle, no "this memory is 3 weeks old and was never useful — drop it." Engram has typed memory categories, vector-based semantic retrieval, trust scoring, scope isolation, forgetting logic, and automatic lifecycle management.&lt;/p&gt;

&lt;p&gt;But Engram ran on OpenClaw. And OpenClaw ran on tokens. And tokens ran on money I didn't have.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Durable Gateway Runtime — Multi-Channel Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/lofder/durable-gateway-runtime" rel="noopener noreferrer"&gt;lofder/durable-gateway-runtime&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenClaw's gateway is its core — the long-running Node.js process that connects WhatsApp, Telegram, Slack, etc. to the AI. But the architecture docs were scattered, and the execution model had gaps when you tried to scale beyond a single instance.&lt;/p&gt;

&lt;p&gt;I wrote a full architecture document for a multi-channel gateway and execution model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ingress normalization&lt;/strong&gt; — how to standardize messages from different platforms into a unified format&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution skeleton&lt;/strong&gt; — the task queue, context assembly, and tool execution pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State durability&lt;/strong&gt; — how to persist conversation state across restarts without losing context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Channel routing&lt;/strong&gt; — how to route different groups/users to isolated agent instances&lt;/li&gt;
&lt;/ul&gt;
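&lt;p&gt;As a flavor of what ingress normalization means in practice, here's a minimal sketch. The unified schema and the adapter are hypothetical, not code from the repo:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class InboundMessage:
    # Unified message shape every channel adapter produces (hypothetical schema).
    channel: str
    sender_id: str
    thread_id: str   # group / DM / channel id, used to route to the right agent
    text: str

def normalize_telegram(update):
    # One adapter per platform; this one flattens a Telegram Bot API update.
    msg = update["message"]
    return InboundMessage(
        channel="telegram",
        sender_id=str(msg["from"]["id"]),
        thread_id=str(msg["chat"]["id"]),
        text=msg.get("text", ""),
    )
```

&lt;p&gt;Everything downstream of the adapters (queueing, context assembly, tool execution) only ever sees &lt;code&gt;InboundMessage&lt;/code&gt;, which is what makes channel routing and state durability tractable.&lt;/p&gt;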

&lt;p&gt;This was meant to be the "how to actually run OpenClaw in production" guide. Not just &lt;code&gt;npm start&lt;/code&gt; on your laptop — real multi-tenant, crash-recoverable deployment.&lt;/p&gt;

&lt;p&gt;I never finished the implementation. The architecture docs are public. The code is experimental. The reason I stopped? Same as everything else: tokens.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Gateway Stability Patch — Production Hotfix Toolkit
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/lofder/openclaw-gateway-stability-patch" rel="noopener noreferrer"&gt;lofder/openclaw-gateway-stability-patch&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This one came from pain. I was running OpenClaw with multiple channels, and the gateway kept crashing. WebSocket handshake races. Connect-challenge timeout drift. Retryable pre-connect closes that weren't actually being retried.&lt;/p&gt;

&lt;p&gt;So I built a proper overlay toolkit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rule-based patches&lt;/strong&gt; — configurable handshake timeout, connect-challenge timeout, bounded retry for loopback failures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;apply/check/rollback CLI&lt;/strong&gt; — not "edit the file and hope." A proper workflow with backups, manifests, and integrity checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version-strict&lt;/strong&gt; — refuses to patch if the runtime version doesn't match. No silent breakage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idempotent&lt;/strong&gt; — run &lt;code&gt;apply&lt;/code&gt; twice, get the same result. No duplicate patches stacking up&lt;/li&gt;
&lt;/ul&gt;
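&lt;p&gt;Not the toolkit's actual code, but a minimal sketch of the two properties that matter most, version-strict and idempotent. The marker, version string, and patched constant are invented:&lt;/p&gt;

```python
PATCH_MARKER = "/* patched: handshake-timeout v1 */"
SUPPORTED_VERSION = "2026.3.1"  # hypothetical runtime version this patch targets

def apply_patch(source, runtime_version):
    # Version-strict: refuse to patch a runtime the rules were never written for.
    if runtime_version != SUPPORTED_VERSION:
        raise RuntimeError("unsupported runtime " + runtime_version + ", refusing to patch")
    # Idempotent: the marker makes a second apply a no-op instead of a double patch.
    if PATCH_MARKER in source:
        return source
    return source.replace(
        "const HANDSHAKE_TIMEOUT = 5000;",
        "const HANDSHAKE_TIMEOUT = 15000; " + PATCH_MARKER,
    )
```

&lt;p&gt;Run &lt;code&gt;apply&lt;/code&gt; twice and the marker short-circuits the second pass; run it against the wrong runtime version and it refuses outright. Everything else in the toolkit (backups, manifests, integrity checks) exists to make rollback equally boring.&lt;/p&gt;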

&lt;p&gt;Pure Python, zero dependencies, MIT licensed. It's the kind of boring infrastructure work that nobody stars on GitHub but everybody needs in production.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Token Bill That Killed It All
&lt;/h2&gt;

&lt;p&gt;Let me tell you how it feels to watch money evaporate.&lt;/p&gt;

&lt;p&gt;You build something you're proud of. Engram is humming. The gateway is stable (thanks to your own patches). Three channels are connected. You go to bed thinking "this is finally working."&lt;/p&gt;

&lt;p&gt;You wake up. Check the API dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$14.37 overnight. While you slept.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your agent was alive. Heartbeating. Checking for tasks every 5 minutes — 288 API calls a day, nearly a hundred of them while you slept. Each one loading the full conversation history + system prompt + all loaded skills + Engram memories into context. Even when there was literally nothing to do, each "nothing to do" cost tokens. Your AI was awake at 3am, spending your money to confirm that nobody had messaged it.&lt;/p&gt;

&lt;p&gt;That was the moment I started doing math I didn't want to do.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why OpenClaw Eats Tokens Like It's Starving
&lt;/h3&gt;

&lt;p&gt;OpenClaw's architecture is fundamentally, structurally, &lt;em&gt;by design&lt;/em&gt; token-hungry. It's not a bug. It's how it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context loading — the silent killer.&lt;/strong&gt; Every single request ships the FULL conversation history + system prompt + loaded skills + memory. Not a summary. Not the relevant parts. Everything. A 20-message conversation with 3 loaded skills hits 8K-12K tokens per request — just for &lt;em&gt;context&lt;/em&gt;, before the AI thinks a single thought. And context tokens count on every request. So message #21 pays for all 20 previous messages again. And again. And again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heartbeat — paying to breathe.&lt;/strong&gt; OpenClaw checks for scheduled tasks periodically. Each heartbeat is a full API call with full context loading. Even "nothing to do" costs tokens. At the default 5-minute interval:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;288 heartbeats/day × 2K tokens (minimum context) = 576,000 tokens/day
                                                  = just to exist
                                                  = ~$1.73/day on Sonnet
                                                  = $52/month for NOTHING
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's $52/month before you even talk to it. Just for the privilege of having it sit there, awake.&lt;/p&gt;
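&lt;p&gt;The arithmetic generalizes. A quick helper to run the same math for your own interval, context size, and model (the price argument is whatever you pay per million input tokens):&lt;/p&gt;

```python
def idle_cost_per_month(interval_min, context_tokens, usd_per_m_input):
    # Cost of heartbeats that do nothing: calls/day x context tokens x input price.
    calls_per_day = 24 * 60 / interval_min
    tokens_per_day = calls_per_day * context_tokens
    return tokens_per_day * usd_per_m_input / 1_000_000 * 30
```

&lt;p&gt;At Sonnet's $3/M input, a 5-minute interval with a 2K-token context comes out to about $51.84/month; stretching the interval to 30 minutes drops it to about $8.64.&lt;/p&gt;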

&lt;p&gt;&lt;strong&gt;Tool chains — compound interest, but bad.&lt;/strong&gt; A simple task like "check my email and summarize" involves: read email (tool call + response tokens) → parse content (inference) → summarize (inference) → store to Engram (tool call) → compose response (inference). That's 4-5 inference rounds. Each round loads the growing context. One email check = ~15K tokens. Do that 3 times a day and you're burning 45K tokens on email alone.&lt;/p&gt;
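&lt;p&gt;The compounding is easy to underestimate, because every round reloads the context that earlier rounds grew. A toy model (round counts and per-round token growth are illustrative):&lt;/p&gt;

```python
def chain_tokens(base_context, rounds, tokens_added_per_round):
    # Each inference round pays for the whole context again, and the context
    # grows with every tool result, so total cost compounds across the chain.
    total = 0
    context = base_context
    for _ in range(rounds):
        total += context
        context += tokens_added_per_round
    return total
```

&lt;p&gt;With a 2K base context, 5 rounds, and ~600 tokens of tool output added per round, one task lands at 16K tokens, right in the neighborhood of the ~15K email check above.&lt;/p&gt;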

&lt;p&gt;&lt;strong&gt;Model pricing — the real knife.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Claude Opus:    $15/M input, $75/M output
Claude Sonnet:  $3/M input, $15/M output  
Claude Haiku:   $0.25/M input, $1.25/M output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OpenClaw defaults to the best model available. If you have Opus access, it uses Opus. For &lt;em&gt;everything&lt;/em&gt;. Including heartbeats. Including "nothing to do." I watched $0.47 disappear on a single heartbeat that concluded "no pending tasks." Forty-seven cents to think about nothing.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Actual Spend
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Week 1:  Just exploring, light usage                   $12
Week 2:  Added Engram + 3 channels, getting real       $47
Week 3:  Gateway stability testing, lots of restarts   $38
Week 4:  Desperate optimization, model fallback        $29
──────────────────────────────────────────────────────────
Total:   4 weeks                                      $126
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;$126. For a personal AI assistant. That crashed. Regularly. And needed me to SSH in and restart it.&lt;/p&gt;

&lt;p&gt;Let me put that in perspective:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$126 = 8 months of Netflix&lt;/li&gt;
&lt;li&gt;$126 = my phone bill for 3 months&lt;/li&gt;
&lt;li&gt;$126 = ChatGPT Plus for 6 months (which just works, no crashing)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And this was the &lt;em&gt;optimized&lt;/em&gt; version. I had already:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switched heartbeat to 30-minute intervals&lt;/li&gt;
&lt;li&gt;Set up model fallback (Haiku for simple, Sonnet for complex)&lt;/li&gt;
&lt;li&gt;Pruned context aggressively&lt;/li&gt;
&lt;li&gt;Disabled 2 of 5 skills to reduce context size&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The unoptimized version? People report $200-1000+/month. There's a famous post on Zhihu: "25 sentences cost nearly $20." That's not rage-bait. That's Tuesday with OpenClaw on Opus.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Moment
&lt;/h3&gt;

&lt;p&gt;I was sitting at my desk. It was a Wednesday afternoon. I opened the Anthropic billing dashboard and saw the week-to-date: $31.40. For 4 days. My bank account had $847 in it.&lt;/p&gt;

&lt;p&gt;I did the math. Even at the month's average of $126, OpenClaw would eat 15% of my remaining savings every month; at that week's pace, closer to a quarter. For a side project. That I was building for fun.&lt;/p&gt;

&lt;p&gt;I opened the terminal. I typed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop openclaw &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; docker &lt;span class="nb"&gt;rm &lt;/span&gt;openclaw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I closed the tab and went for a walk.&lt;/p&gt;

&lt;p&gt;That walk lasted about an hour. I came back and started thinking about what I could build that would cost $0 to run.&lt;/p&gt;




&lt;h2&gt;
  
  
  Hermes Agent: The Thing I Almost Built
&lt;/h2&gt;

&lt;p&gt;Two weeks after I killed my OpenClaw, Hermes Agent dropped. And the tech community went wild.&lt;/p&gt;

&lt;p&gt;"Self-improving AI agent!" "Procedural memory!" "Model-agnostic!" "The OpenClaw killer!"&lt;/p&gt;

&lt;p&gt;I read the architecture docs. I looked at the feature list. And I felt... recognition.&lt;/p&gt;

&lt;p&gt;Here's what Hermes Agent launched with, mapped to what I already had:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Hermes Agent (launched)&lt;/th&gt;
&lt;th&gt;My stack (built earlier)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Procedural memory — auto-generates skills from experience&lt;/td&gt;
&lt;td&gt;Engram — 7 memory types, trust scoring, auto-compression, scope isolation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Session history with FTS5 search&lt;/td&gt;
&lt;td&gt;Engram — Qdrant vector search + Mem0 semantic retrieval&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Model-agnostic runtime&lt;/td&gt;
&lt;td&gt;Was already using model fallback in my OpenClaw config&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CLI + TUI + messaging platforms&lt;/td&gt;
&lt;td&gt;Durable gateway runtime — multi-channel architecture with ingress normalization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pre-execution security scanner&lt;/td&gt;
&lt;td&gt;Gateway stability patch — version-strict apply/check/rollback&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cron-based scheduled tasks&lt;/td&gt;
&lt;td&gt;OpenClaw heartbeat (the thing that ate my tokens)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I'm not saying Hermes copied anything. They didn't. But the problems they're solving? I was already there. The difference: Hermes is backed by Nous Research with (presumably) a budget for running their own models. I was a solo developer paying retail API prices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where Hermes genuinely does better:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python-native (easier to hack on for ML people)&lt;/li&gt;
&lt;li&gt;The do-learn-improve loop is cleaner than my separate Engram + OpenClaw integration&lt;/li&gt;
&lt;li&gt;Zero telemetry by default is a strong stance&lt;/li&gt;
&lt;li&gt;Better documentation and community&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where my design was ahead:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Forgetting logic&lt;/strong&gt; — this is the big one. Hermes remembers. It doesn't forget. Every memory system that only accumulates eventually collapses under its own weight — context bloat, token waste, contradictory old entries polluting new decisions. Engram tracks age, access frequency, and relevance decay. It knows when to compress, when to merge, and when to let go. Knowing what to forget is harder than knowing what to remember, and Hermes doesn't even try.&lt;/li&gt;
&lt;li&gt;Engram's scoped memory (&lt;code&gt;global&lt;/code&gt; / &lt;code&gt;group&lt;/code&gt; / &lt;code&gt;dm&lt;/code&gt; / &lt;code&gt;agent&lt;/code&gt;) is more granular than Hermes' flat note system&lt;/li&gt;
&lt;li&gt;Trust scoring on memories (high/medium/low) — Hermes doesn't distinguish between user-stated facts and AI inferences&lt;/li&gt;
&lt;li&gt;The gateway stability patch addresses real production issues that Hermes hasn't faced yet (because it's still young)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The honest truth: architecture doesn't matter if you can't afford to run it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pivot: From Token Burn to Zero-Cost MCP Tooling
&lt;/h2&gt;

&lt;p&gt;That walk after &lt;code&gt;docker rm openclaw&lt;/code&gt; changed how I think about building AI tools.&lt;/p&gt;

&lt;p&gt;The question wasn't "what's the coolest thing I can build?" anymore. It was: &lt;strong&gt;"what can I build that doesn't need me to feed it money every month just to exist?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The answer was MCP tools. Not agents. Not platforms. Tools.&lt;/p&gt;

&lt;p&gt;The insight: I don't need to build the agent. Claude is an agent. Cursor is an agent. ChatGPT is an agent. Millions of people already pay for these. What they need is &lt;strong&gt;tools that plug in&lt;/strong&gt; — domain-specific logic that runs locally, costs nothing, and returns results in milliseconds.&lt;/p&gt;

&lt;p&gt;That's how &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product&lt;/a&gt; happened — 12 tools and 4 prompts for dropshipping automation. And &lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;sku-matcher&lt;/a&gt; — a pure-algorithm SKU matching engine that runs in milliseconds with zero model dependency.&lt;/p&gt;

&lt;p&gt;The irony is almost painful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;OpenClaw era&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Built&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;memory system + gateway + patches&lt;/span&gt;
  &lt;span class="na"&gt;Users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
  &lt;span class="na"&gt;Cost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;$126/month&lt;/span&gt;
  &lt;span class="na"&gt;Status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dead&lt;/span&gt;

&lt;span class="na"&gt;MCP era&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Built&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dsers-mcp-product + sku-matcher&lt;/span&gt;
  &lt;span class="na"&gt;Users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;25 stars, 3900+ npm downloads&lt;/span&gt;
  &lt;span class="na"&gt;Cost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;$0/month&lt;/span&gt;
  &lt;span class="na"&gt;Status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;growing&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My $0/month MCP tools, running inside Claude or Cursor as the host agent, deliver more practical value than my entire $126/month OpenClaw setup ever did. The host agent handles conversation, memory, and orchestration. My tools just do the domain-specific work. No token bill. No gateway crashes. No heartbeat burning money at 3am while I sleep.&lt;/p&gt;

&lt;p&gt;The lesson: &lt;strong&gt;don't build the platform. Build the tool. Let someone else's platform run it. Let someone else pay the token bill.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd Tell Past Me
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The "self-hosted AI agent" dream is a tax on enthusiasm.&lt;/strong&gt; Everyone who sets up OpenClaw feels like Tony Stark for the first 48 hours. Then the invoice arrives. Until local models reach cloud API quality for complex tasks, "self-hosted" just means "you pay retail token prices with no negotiating power." OpenClaw + Claude Opus is a $100+/month commitment for basic utility. That's a subscription you didn't sign up for, to a service that crashes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Beautiful architecture is worthless if you can't keep the lights on.&lt;/strong&gt; Engram's design is solid. I still believe scoped, typed, trust-scored memory is the right approach. But nobody cares about your memory architecture when you're explaining to your bank why there's a $47 charge from "Anthropic PBC" this week.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;If you can't afford the runtime, the architecture doesn't ship.&lt;/strong&gt; I still think about Engram's design. It's good work. But good work sitting in a stopped container is just a GitHub repo with a nice README.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  The Projects Live On (Sort Of)
&lt;/h2&gt;

&lt;p&gt;Everything is still public on GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/lofder/Engram" rel="noopener noreferrer"&gt;Engram&lt;/a&gt; — the memory architecture. If someone wants to build on it, the design is there.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/lofder/durable-gateway-runtime" rel="noopener noreferrer"&gt;durable-gateway-runtime&lt;/a&gt; — the architecture docs. Good reading material if you're designing multi-channel AI systems.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/lofder/openclaw-gateway-stability-patch" rel="noopener noreferrer"&gt;openclaw-gateway-stability-patch&lt;/a&gt; — the stability toolkit. Still useful if you're running OpenClaw in production and hitting WebSocket issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And what came after:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt; — dropshipping automation via MCP. 25 stars, 3900+ npm downloads, zero token cost. The thing that actually works.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;sku-matcher&lt;/a&gt; — SKU variant matching engine. Pure algorithm, no model, millisecond response. Being integrated into DSers MCP as automated supplier replacement.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Honestly?
&lt;/h2&gt;

&lt;p&gt;I miss it.&lt;/p&gt;

&lt;p&gt;I miss having an AI that knew me across sessions. That remembered I prefer TypeScript over Python, that I hate verbose logging, that my deployment always goes to the &lt;code&gt;staging&lt;/code&gt; branch first. Engram made the AI feel like a colleague who'd been working with me for months, not a stranger I had to re-brief every morning.&lt;/p&gt;

&lt;p&gt;I miss the gateway routing — different channels for different contexts, the Telegram bot for quick notes, the Slack integration for work stuff. It felt like having a real assistant with a real presence, not just a text box I paste into.&lt;/p&gt;

&lt;p&gt;I don't miss the crashes. I don't miss the $14 overnight surprise. I definitely don't miss the $0.47 heartbeat that thought about nothing.&lt;/p&gt;

&lt;p&gt;But the thing is — I didn't kill OpenClaw because I wanted to. I killed it because I couldn't afford it. There's a difference. If someone handed me an API key with unlimited tokens tomorrow, I'd &lt;code&gt;docker run openclaw&lt;/code&gt; before they finished the sentence. Engram is still the best memory architecture I've designed. The gateway runtime is still the most thoughtful multi-channel AI system I've documented. The stability patches still solve real problems.&lt;/p&gt;

&lt;p&gt;I just can't pay $126/month for the privilege of running them.&lt;/p&gt;

&lt;p&gt;So this article is partly a portfolio piece, partly a technical comparison, and partly me venting into the void. If you're thinking about building on OpenClaw, or Hermes, or any self-hosted AI agent — read the pricing page before you read the architecture docs. Calculate the monthly token cost before you write the first line of code. The "self-hosted AI agent" dream is real. The bill is also real.&lt;/p&gt;

&lt;p&gt;And if you're from Nous Research reading this — Hermes is good. But add forgetting logic. Your users will thank you in 3 months when their memory stores aren't 50,000 entries of stale garbage.&lt;/p&gt;

&lt;p&gt;I'll be here, building $0/month MCP tools, waiting for the day token prices drop enough that I can bring my OpenClaw back to life.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Related reading:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;I Built an MCP Server to Automate Dropshipping Product Imports&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/_95a3e57463e6442feacd0/your-aliexpress-supplier-just-died-heres-how-ai-will-auto-replace-it-20h8"&gt;Your AliExpress Supplier Just Died — Here's How AI Will Auto-Replace It&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/_95a3e57463e6442feacd0/your-ai-agent-has-amnesia-heres-how-to-fix-it-mcp-mem0-qdrant-4905"&gt;Your AI Agent Has Amnesia — Here's How to Fix It (MCP + Mem0 + Qdrant)&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>architecture</category>
      <category>mcp</category>
    </item>
    <item>
      <title>5 Ways to Match AliExpress Product Variants — LLM, Embedding, Vision, Rules, and Why I Chose None of Them Alone</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Wed, 08 Apr 2026 13:20:19 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/5-ways-to-match-aliexpress-product-variants-llm-embedding-vision-rules-and-why-i-chose-none-3afj</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/5-ways-to-match-aliexpress-product-variants-llm-embedding-vision-rules-and-why-i-chose-none-3afj</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; — I compared 5 technical approaches for matching product variants across AliExpress suppliers: string rules, vector embeddings, LLM prompting, vision models (CNN/CLIP), and hybrid algorithmic. Each has clear trade-offs in accuracy, speed, cost, and tolerance for real-world naming chaos. I ended up building a &lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;hybrid algorithm&lt;/a&gt; — no model, no GPU, no API call — specifically designed to run inside &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;MCP tool calls&lt;/a&gt; where latency and determinism matter. This article breaks down each approach with real AliExpress examples so you can choose what fits your stack.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem: Supplier Replacement Is a Matching Problem
&lt;/h2&gt;

&lt;p&gt;When a dropshipping supplier goes down — dead link, out of stock, price spike — you need to find a replacement and remap every SKU variant to the new supplier.&lt;/p&gt;

&lt;p&gt;That sounds simple until you see what AliExpress variant data actually looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Supplier A (current)         Supplier B (replacement)
───────────────────────      ─────────────────────────
Color: Navy Blue             颜色: Dark Blue
Size: XL                     尺码: XL
Ships From: China            (no Ships From option)
100*130cm                    1x1.3m
4PC-32x42cm                  4pcs 32*42
Warm White                   暖光
Color: 03                    Color: C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "Color" field might contain phone models, material types, or bare numbers. Dimensions come in every format imaginable. Languages mix within a single listing. Quantities are embedded in size strings. This is normal on AliExpress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The question is: which technology handles this chaos best, and at what cost?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I tested and compared 5 approaches. Here's what I found.&lt;/p&gt;




&lt;h2&gt;
  
  
  Approach 1: Rule-Based String Matching
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; Compare variant values using exact match, Levenshtein edit distance, Jaccard similarity, or TF-IDF cosine. Set a threshold — if similarity &amp;gt; 0.8, it's a match.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools:&lt;/strong&gt; Python &lt;code&gt;difflib&lt;/code&gt;, &lt;code&gt;fuzzywuzzy&lt;/code&gt;, &lt;code&gt;RapidFuzz&lt;/code&gt;, custom regex.&lt;/p&gt;
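&lt;p&gt;The whole approach fits in a few lines of stdlib Python, which is both its appeal and its ceiling:&lt;/p&gt;

```python
from difflib import SequenceMatcher

def is_match(a, b, threshold=0.8):
    # Stdlib edit-style similarity; RapidFuzz offers faster equivalents.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

print(is_match("Size: XL", "Size: XL"))   # True: identical strings
print(is_match("Navy Blue", "Dark Blue")) # False: ratio is about 0.67
print(is_match("100*130cm", "1x1.3m"))    # False: same size, different notation
```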

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extremely fast (~0.01ms per pair)&lt;/li&gt;
&lt;li&gt;Zero infrastructure — runs anywhere, no dependencies&lt;/li&gt;
&lt;li&gt;Predictable behavior, easy to debug&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fails on synonyms.&lt;/strong&gt; "Navy Blue" vs "Dark Blue" → Levenshtein distance = 3, cosine similarity ≈ 0.3. Not a match by any threshold that doesn't also false-positive "Dark Red" and "Dark Green."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fails on cross-language.&lt;/strong&gt; "红色" vs "Red" → zero string overlap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fails on units.&lt;/strong&gt; "100*130cm" vs "1x1.3m" → completely different strings, same dimensions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fails on composite values.&lt;/strong&gt; "4PC-32x42cm" vs "4pcs 32*42" → edit distance says these are unrelated.&lt;/li&gt;
&lt;/ul&gt;
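&lt;p&gt;The unit failure, at least, is fixable by parsing instead of comparing strings. Here's a sketch of the kind of normalizer that makes &lt;code&gt;100*130cm&lt;/code&gt; and &lt;code&gt;1x1.3m&lt;/code&gt; compare equal (illustrative, not sku-matcher's actual code):&lt;/p&gt;

```python
import re

# Illustrative normalizer, not sku-matcher's actual code: parse two-value
# dimension strings into canonical centimetres so formats compare equal.
DIM_RE = re.compile(r"(\d+(?:\.\d+)?)\s*[x*×]\s*(\d+(?:\.\d+)?)\s*(cm|m)?", re.I)

def normalize_dimensions(value):
    m = DIM_RE.search(value)
    if not m:
        return None  # no dimension pair found in this variant string
    scale = 100.0 if (m.group(3) or "cm").lower() == "m" else 1.0
    a, b = float(m.group(1)) * scale, float(m.group(2)) * scale
    return tuple(sorted((a, b)))
```

&lt;p&gt;But every such rule covers exactly one naming pattern, and AliExpress has an endless supply of patterns. Rules are a component, not a solution.&lt;/p&gt;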

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Fine for exact or near-exact matches. Falls apart the moment suppliers use different naming conventions — which is almost always on AliExpress.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Scorecard&lt;/strong&gt;&lt;br&gt;
⚡ Speed: &lt;code&gt;~0.01ms/pair&lt;/code&gt; — blazing fast&lt;br&gt;
💰 Cost: &lt;code&gt;$0&lt;/code&gt; — zero infrastructure&lt;br&gt;
🎯 Accuracy: &lt;code&gt;██░░░░░░░░&lt;/code&gt; 30-45%&lt;br&gt;
🌐 Cross-language: ❌&lt;br&gt;
📐 Unit conversion: ❌&lt;br&gt;
🔮 Naming chaos: ❌&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Approach 2: Vector Embeddings (Sentence-BERT, MiniLM)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; Encode variant names into high-dimensional vectors using a pre-trained model. Compute cosine similarity between vectors. Semantically similar texts end up close in vector space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools:&lt;/strong&gt; &lt;code&gt;sentence-transformers&lt;/code&gt;, &lt;code&gt;multi-qa-MiniLM-L6-cos-v1&lt;/code&gt;, FAISS for fast retrieval, Milvus for production scale.&lt;/p&gt;
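&lt;p&gt;Only the encoding half needs a model; the scoring half is plain vector math. In real use the vectors come from something like &lt;code&gt;model.encode([...])&lt;/code&gt; in sentence-transformers, which I've left out here to keep the sketch dependency-free:&lt;/p&gt;

```python
import math

def cosine(u, v):
    # Similarity between two embedding vectors: 1.0 = same direction, 0.0 = unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```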

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles synonyms well — "Navy Blue" and "Dark Blue" are close in embedding space&lt;/li&gt;
&lt;li&gt;Handles some cross-language matching if using multilingual models (e.g., &lt;code&gt;paraphrase-multilingual-MiniLM-L12-v2&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Scales well with vector databases for large catalogs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Struggles with units and dimensions.&lt;/strong&gt; Embeddings encode semantic meaning, but &lt;code&gt;100*130cm&lt;/code&gt; and &lt;code&gt;1x1.3m&lt;/code&gt; are not semantically related in training data — they're formatted numbers, not natural language.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Opaque codes are random noise.&lt;/strong&gt; "03" and "C" have no semantic content. Embeddings can't help.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requires a model.&lt;/strong&gt; MiniLM is small (~80MB), but you still need to load it, and inference isn't free — roughly 5ms per encoding on CPU, which adds up when you're encoding every variant of every candidate supplier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;False positives on short strings.&lt;/strong&gt; "S" (small) and "M" (medium) are very close in embedding space because they frequently co-occur, but they're different sizes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composite values are opaque.&lt;/strong&gt; "4PC-32x42cm" embeds as a chunk — the model doesn't parse it into count=4, dimensions=32×42, unit=cm.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Significantly better than string matching for natural language variant names. But AliExpress data is often structured (numbers, units, codes), not natural language — and that's where embeddings struggle.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Scorecard&lt;/strong&gt;&lt;br&gt;
⚡ Speed: &lt;code&gt;~5ms/pair&lt;/code&gt; — fast enough&lt;br&gt;
💰 Cost: &lt;code&gt;~$0&lt;/code&gt; self-hosted, 80-200MB model RAM&lt;br&gt;
🎯 Accuracy: &lt;code&gt;████░░░░░░&lt;/code&gt; 50-65%&lt;br&gt;
🌐 Cross-language: ⚠️ with multilingual model&lt;br&gt;
📐 Unit conversion: ❌&lt;br&gt;
🔮 Naming chaos: ⚠️ synonyms yes, units/codes no&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Approach 3: LLM Prompting (GPT-4o, Claude)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; Send a prompt to an LLM with both variant lists and ask it to produce a mapping. The model uses its world knowledge to understand that "Navy Blue" = "Dark Blue", parse units, and handle cross-language text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools:&lt;/strong&gt; OpenAI API, Anthropic API, any LLM with function calling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Given these store variants and supplier variants,
produce a JSON mapping of best matches with confidence scores.
Store: ["Navy Blue / XL", "Warm White / 100*130cm"]
Supplier: ["Dark Blue / XL", "暖光 / 1x1.3m"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Best accuracy for natural language.&lt;/strong&gt; LLMs understand that "Warm White" = "暖光" = "3000K" across languages and domains.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Can handle novel cases.&lt;/strong&gt; If a supplier uses unusual terminology, the LLM's world knowledge often covers it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Can reason about composite values.&lt;/strong&gt; A good LLM can parse "4PC-32x42cm" into count + dimensions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Slow.&lt;/strong&gt; A single API call for 40×40 variants takes 3-10 seconds. If you're evaluating 10 candidate suppliers, that's 30-100 seconds just for matching.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expensive.&lt;/strong&gt; Variant matching for one product replacement burns 2K-5K tokens. At GPT-4o pricing, that's ~$0.01-0.03 per product. For a catalog scan of 500 products, that's $5-15 per run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-deterministic.&lt;/strong&gt; Same input can produce different outputs across calls. Temperature=0 helps but doesn't eliminate variance. You can't reliably cache or pre-compute results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate limits.&lt;/strong&gt; Hitting OpenAI or Anthropic API limits when doing batch operations is a real operational concern.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency kills MCP tool calls.&lt;/strong&gt; If SKU matching runs inside an MCP tool (where an AI agent is already waiting for a response), adding another LLM call creates a nested latency chain: the agent sits blocked, holding its context open, until the inner call returns.&lt;/li&gt;
&lt;/ul&gt;
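&lt;p&gt;The cost and latency weaknesses compound at catalog scale. A back-of-envelope estimator (the per-call figures are the rough numbers quoted above, not official API pricing):&lt;/p&gt;

```typescript
// Back-of-envelope cost/latency for LLM-based matching at catalog scale.
// costPerCall and secondsPerCall are the ballpark figures from the text
// (~$0.01-0.03 and 3-10s per product), not quoted API prices.
function estimateScan(
  products: number,
  candidatesPerProduct: number,
  costPerCall = 0.03,
  secondsPerCall = 5,
): { totalCost: number; totalHours: number } {
  const calls = products * candidatesPerProduct;
  return {
    totalCost: calls * costPerCall,
    totalHours: (calls * secondsPerCall) / 3600,
  };
}

const scan = estimateScan(500, 10); // 500 products, 10 candidate suppliers each
console.log(scan.totalCost.toFixed(0));  // ~150 dollars per full scan
console.log(scan.totalHours.toFixed(1)); // ~6.9 hours if calls run sequentially
```

Sequential execution is pessimistic (batching and parallel calls help with the hours), but the dollar cost is unchanged.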

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Most accurate for one-off matching. But too slow, too expensive, and too non-deterministic for high-volume automated pipelines — especially inside MCP tool calls where the AI agent is already using an LLM for orchestration.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Scorecard&lt;/strong&gt;&lt;br&gt;
⚡ Speed: &lt;code&gt;3-10s/product&lt;/code&gt; — slow (API roundtrip)&lt;br&gt;
💰 Cost: &lt;code&gt;$0.01-0.03/product&lt;/code&gt; — adds up at scale&lt;br&gt;
🎯 Accuracy: &lt;code&gt;███████░░░&lt;/code&gt; 75-90%&lt;br&gt;
🌐 Cross-language: ✅&lt;br&gt;
📐 Unit conversion: ⚠️ can reason, but not reliable&lt;br&gt;
🔮 Naming chaos: ✅ best at novel cases&lt;br&gt;
⚠️ Deterministic: ❌ — different output each call&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Approach 4: Vision Models (CNN / CLIP / VLM)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; Compare product images instead of (or in addition to) text. Use a CNN (ResNet, MobileNet), CLIP, or a Vision-Language Model to extract image features and compute similarity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools:&lt;/strong&gt; CLIP (OpenAI), ResNet/MobileNet (TorchVision), Google Lens API, AliExpress reverse image search.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solves the opaque code problem.&lt;/strong&gt; When "Color: 1, 2, 3" maps to red, blue, green — only images tell you which is which.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-supplier visual match.&lt;/strong&gt; Same factory product sold by different suppliers usually shares identical or near-identical product photos.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handles language-agnostic matching.&lt;/strong&gt; Images don't have a language barrier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Heavy inference.&lt;/strong&gt; CLIP embeddings: ~50-100ms per image on GPU, ~500ms on CPU. For 40 variants with images × 10 candidates = 400 images to process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU dependency.&lt;/strong&gt; Running CNN/CLIP inference on CPU is 10× slower. Production use requires GPU infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image quality varies wildly.&lt;/strong&gt; Supplier photos range from professional studio shots to blurry phone photos. White background vs lifestyle context vs composite images with text overlays.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Can't distinguish size/quantity.&lt;/strong&gt; A photo of a placemat set looks the same whether it's 2-pack or 6-pack. Vision models can't read text in images reliably for unit differentiation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Overkill for most matching.&lt;/strong&gt; When "Navy Blue" and "Dark Blue" can be matched by a synonym table, launching a vision model is using a cannon to kill a fly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Essential for the specific case of opaque color codes (numbers or letters instead of color names). But too expensive and slow to be the primary matching method. Best used as a fallback layer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Scorecard&lt;/strong&gt;&lt;br&gt;
⚡ Speed: &lt;code&gt;50-500ms/image&lt;/code&gt; — heavy inference&lt;br&gt;
💰 Cost: &lt;code&gt;~$0&lt;/code&gt; self-hosted, but GPU required&lt;br&gt;
🎯 Accuracy: &lt;code&gt;█████░░░░░&lt;/code&gt; 60-80% (visual only)&lt;br&gt;
🌐 Cross-language: ✅ images have no language&lt;br&gt;
📐 Unit conversion: ❌ can't read quantities from photos&lt;br&gt;
🔮 Naming chaos: ⚠️ images yes, units/quantities no&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Approach 5: Hybrid Algorithmic (sku-matcher)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; A purpose-built matching engine that combines multiple lightweight techniques in layers, designed specifically for AliExpress variant data patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools:&lt;/strong&gt; &lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;sku-matcher&lt;/a&gt; — open-source, single TypeScript file, ~1200 lines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three matching layers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 1: Text matching with synonym tables&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exact match, case-normalized match, substring containment&lt;/li&gt;
&lt;li&gt;Synonym lookup covering 5 languages (English, Chinese, French, German, Russian) for colors, sizes, materials&lt;/li&gt;
&lt;li&gt;Opaque code detection — recognizes that bare "A", "B", "01", "02" carry low information and scores accordingly&lt;/li&gt;
&lt;li&gt;Option name alignment — maps "Color" ↔ "颜色", "Emitting Color" ↔ "颜色", "Size" ↔ "尺码"&lt;/li&gt;
&lt;/ul&gt;
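&lt;p&gt;A stripped-down sketch of the synonym-table idea — the entries below are illustrative stand-ins, while the real tables cover colors, sizes, and materials across five languages:&lt;/p&gt;

```typescript
// Canonical value mapped to the spellings and translations that mean it.
// Entries are illustrative; the real tables are far larger.
const SYNONYMS: { [canonical: string]: string[] } = {
  gray: ["grey", "gray", "灰色", "gris", "grau"],
  "dark blue": ["navy blue", "dark blue", "navy", "深蓝", "藏青"],
  red: ["red", "红色", "rouge", "rot"],
};

// Resolve a raw variant value to its canonical form, if known.
function canonicalize(value: string): string | null {
  const v = value.trim().toLowerCase();
  for (const [canonical, aliases] of Object.entries(SYNONYMS)) {
    if (aliases.includes(v)) return canonical;
  }
  return null;
}

// Two values match if string-equal (case-normalized) or same canonical form.
function textMatch(a: string, b: string): boolean {
  if (a.trim().toLowerCase() === b.trim().toLowerCase()) return true;
  const ca = canonicalize(a);
  const cb = canonicalize(b);
  return ca !== null && ca === cb;
}

console.log(textMatch("Navy Blue", "深蓝")); // true, via the synonym table
console.log(textMatch("Grey", "Gray"));      // true
```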

&lt;p&gt;&lt;strong&gt;Layer 2: Unit-aware parsing and conversion&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parses composite values: &lt;code&gt;4PC-32x42cm&lt;/code&gt; → qty:4, dimensions:32×42cm&lt;/li&gt;
&lt;li&gt;Converts between unit systems: g↔kg, cm↔m↔mm↔inch, ml↔L, pcs↔pieces↔片↔件&lt;/li&gt;
&lt;li&gt;Dimension matching with tolerance: &lt;code&gt;100*130cm&lt;/code&gt; matches &lt;code&gt;1x1.3m&lt;/code&gt; (both = 1000×1300mm)&lt;/li&gt;
&lt;li&gt;Area-equivalent detection: &lt;code&gt;100×200cm&lt;/code&gt; matches &lt;code&gt;200×100cm&lt;/code&gt; (same dimensions, listed in a different order)&lt;/li&gt;
&lt;li&gt;Unit-family alignment: if 80%+ of one option's values are weights (g/kg/oz), it aligns with any other option whose values are also predominantly weights&lt;/li&gt;
&lt;/ul&gt;
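&lt;p&gt;Layer 2 can be sketched roughly like this. It's a simplified stand-in covering only the cm/m/mm patterns from the examples above, with an assumed default of cm when no unit is given:&lt;/p&gt;

```typescript
// Parse composite variant values like "4PC-32x42cm" or "1x1.3m" into
// quantity + dimensions in millimetres. Simplified sketch, not the
// full unit table.
interface ParsedVariant {
  qty: number;
  dims: number[]; // sorted, in mm
}

const UNIT_TO_MM: { [unit: string]: number } = { mm: 1, cm: 10, m: 1000 };

function parseVariant(raw: string): ParsedVariant | null {
  const qtyMatch = raw.match(/(\d+)\s*(?:pcs?|片|件)/i);
  const dimMatch = raw.match(/([\d.]+)\s*[x*×]\s*([\d.]+)\s*(mm|cm|m)?/i);
  if (!dimMatch) return null;
  const scale = UNIT_TO_MM[(dimMatch[3] ?? "cm").toLowerCase()];
  const dims = [Number(dimMatch[1]) * scale, Number(dimMatch[2]) * scale]
    .sort((a, b) => a - b); // sorted, so 100×200 equals 200×100
  return { qty: qtyMatch ? Number(qtyMatch[1]) : 1, dims };
}

// Same variant iff same quantity and each dimension within 2% tolerance.
function sameVariant(a: string, b: string): boolean {
  const pa = parseVariant(a);
  const pb = parseVariant(b);
  if (!pa || !pb || pa.qty !== pb.qty) return false;
  return pa.dims.every((d, i) => Math.abs(d - pb.dims[i]) / d <= 0.02);
}

console.log(sameVariant("100*130cm", "1x1.3m"));       // true: both 1000×1300mm
console.log(sameVariant("4PC-32x42cm", "4pcs 32*42")); // true: qty 4, same dims
```

Sorting the parsed dimensions is what makes the order-swapped case fall out for free.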

&lt;p&gt;&lt;strong&gt;Layer 3: Image matching via dHash&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perceptual hashing (dHash): resize to 9×8 grayscale, compare adjacent pixels → 64-bit fingerprint&lt;/li&gt;
&lt;li&gt;Hamming distance comparison: ≤5 = same image, ≤10 = highly similar, ≤15 = likely match&lt;/li&gt;
&lt;li&gt;Only 1 dependency (&lt;code&gt;sharp&lt;/code&gt; for image processing), no GPU, no model&lt;/li&gt;
&lt;li&gt;Used as auxiliary signal, not primary — adds bonus points to text-based score&lt;/li&gt;
&lt;/ul&gt;
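&lt;p&gt;The comparison side of Layer 3 needs nothing but bit arithmetic. In this sketch the fingerprints are hand-made BigInts standing in for real dHash output (which would come from sharp-resized 9×8 grayscale images):&lt;/p&gt;

```typescript
// Hamming distance between two 64-bit dHash fingerprints:
// count the bits where the fingerprints disagree.
function hammingDistance(a: bigint, b: bigint): number {
  let x = a ^ b;
  let count = 0;
  while (x) {
    count += Number(x & 1n);
    x >>= 1n;
  }
  return count;
}

// Thresholds from the text: ≤5 same, ≤10 highly similar, ≤15 likely match.
function imageVerdict(a: bigint, b: bigint): string {
  const d = hammingDistance(a, b);
  if (d <= 5) return "same";
  if (d <= 10) return "highly similar";
  if (d <= 15) return "likely match";
  return "different";
}

const hashA = 0xd4c1a3f078b2e615n; // hand-made stand-in fingerprints
const hashB = 0xd4c1a3f078b2e617n; // differs from hashA in a single bit
console.log(imageVerdict(hashA, hashB)); // "same"
```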

&lt;p&gt;&lt;strong&gt;Additional signals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Price proximity (within 5% or 15% adds bonus points)&lt;/li&gt;
&lt;li&gt;Logistics dimension filtering ("Ships From" automatically excluded from matching)&lt;/li&gt;
&lt;li&gt;Same-product detection (identical image URLs across suppliers)&lt;/li&gt;
&lt;li&gt;Unit-price analysis (normalizes pack size, so a "4-pack at $6.20" is compared at $1.55 per piece against a "1-piece at $2.83" listing, rather than on sticker price)&lt;/li&gt;
&lt;/ul&gt;
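&lt;p&gt;The unit-price signal is plain arithmetic once pack size has been parsed out. A sketch with an illustrative listing shape (not DSers' actual data model):&lt;/p&gt;

```typescript
// Compare listings on per-unit price rather than sticker price.
// The Listing shape is illustrative, not DSers' actual data model.
interface Listing {
  price: number;
  packSize: number; // pieces per pack, parsed from the variant name
}

function unitPrice(l: Listing): number {
  return l.price / l.packSize;
}

// True when per-unit prices sit within the tolerance (default 10%).
function comparableUnitPrice(a: Listing, b: Listing, tol = 0.1): boolean {
  const ua = unitPrice(a);
  const ub = unitPrice(b);
  return Math.abs(ua - ub) / Math.max(ua, ub) <= tol;
}

const fourPack = { price: 6.2, packSize: 4 }; // $1.55 per piece
const single = { price: 1.6, packSize: 1 };   // $1.60 per piece
console.log(comparableUnitPrice(fourPack, single)); // true (about 3% apart)
```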

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Scorecard&lt;/strong&gt;&lt;br&gt;
⚡ Speed: &lt;code&gt;~1.5ms/product&lt;/code&gt; — 100×100 pairs in 150ms&lt;br&gt;
💰 Cost: &lt;code&gt;$0&lt;/code&gt; — ~5MB RAM, zero GPU, zero API&lt;br&gt;
🎯 Accuracy: &lt;code&gt;██████░░░░&lt;/code&gt; 70-85%&lt;br&gt;
🌐 Cross-language: ✅ 5 languages built-in&lt;br&gt;
📐 Unit conversion: ✅ g↔kg, cm↔m↔inch, AxB dimensions&lt;br&gt;
🔮 Naming chaos: ✅ built specifically for it&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Head-to-Head Comparison
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Performance &amp;amp; Cost
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Cost / product&lt;/th&gt;
&lt;th&gt;GPU?&lt;/th&gt;
&lt;th&gt;Model size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;String Rules&lt;/td&gt;
&lt;td&gt;⚡⚡⚡ ~0.01ms&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Embeddings&lt;/td&gt;
&lt;td&gt;⚡⚡ ~5ms&lt;/td&gt;
&lt;td&gt;~$0&lt;/td&gt;
&lt;td&gt;optional&lt;/td&gt;
&lt;td&gt;80-200MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLM Prompting&lt;/td&gt;
&lt;td&gt;⚡ 3-10s&lt;/td&gt;
&lt;td&gt;$0.01-0.03&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;API key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vision (CLIP/CNN)&lt;/td&gt;
&lt;td&gt;⚡ 50-500ms&lt;/td&gt;
&lt;td&gt;~$0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;yes&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;200MB-2GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hybrid (sku-matcher)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⚡⚡⚡ ~1.5ms&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;—&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;None&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Matching Capabilities
&lt;/h3&gt;

&lt;p&gt;Which real-world AliExpress scenarios can each approach actually handle?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Rules&lt;/th&gt;
&lt;th&gt;Embed&lt;/th&gt;
&lt;th&gt;LLM&lt;/th&gt;
&lt;th&gt;Vision&lt;/th&gt;
&lt;th&gt;Hybrid&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;Grey&lt;/code&gt; → &lt;code&gt;Gray&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;Navy Blue&lt;/code&gt; → &lt;code&gt;Dark Blue&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;红色&lt;/code&gt; → &lt;code&gt;Red&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;100*130cm&lt;/code&gt; → &lt;code&gt;1x1.3m&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;4PC-32x42cm&lt;/code&gt; → &lt;code&gt;4pcs 32*42&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;Warm White&lt;/code&gt; → &lt;code&gt;暖光&lt;/code&gt; / &lt;code&gt;3000K&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;Color: 03&lt;/code&gt; → &lt;code&gt;Color: C&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;Ships From: China&lt;/code&gt; → (filtered)&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;500g&lt;/code&gt; → &lt;code&gt;0.5kg&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2-pack vs 6-pack (same photo)&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ = handles well   ⚠️ = partial / unreliable   ❌ = fails   — = not applicable&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Architecture Fit
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;th&gt;Rules&lt;/th&gt;
&lt;th&gt;Embed&lt;/th&gt;
&lt;th&gt;LLM&lt;/th&gt;
&lt;th&gt;Vision&lt;/th&gt;
&lt;th&gt;Hybrid&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Deterministic output&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP tool-call ready&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;❌ slow&lt;/td&gt;
&lt;td&gt;❌ slow&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No external dependency&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Works offline&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Handles novel terms&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Accuracy on Real AliExpress Data
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;String Rules    ██░░░░░░░░░░░░░░░░░░  30-45%
Embeddings      █████░░░░░░░░░░░░░░░  50-65%
Vision (CLIP)   ██████░░░░░░░░░░░░░░  60-80%
Hybrid (ours)   ██████████░░░░░░░░░░  70-85%
LLM Prompting   ████████████░░░░░░░░  75-90%  (but 2000× slower)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Why I Chose the Hybrid Approach — and Why It Matters for MCP
&lt;/h2&gt;

&lt;p&gt;The comparison above shows an interesting pattern: &lt;strong&gt;LLM prompting is the most accurate, but the least suitable for automated pipelines.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you're building an MCP tool that an AI agent calls during a conversation, the matching engine is just one step in a larger workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Agent &lt;span class="o"&gt;(&lt;/span&gt;LLM&lt;span class="o"&gt;)&lt;/span&gt; → dsers_import_list → dsers_find_product
           → sku_matcher &lt;span class="o"&gt;(&lt;/span&gt;matching&lt;span class="o"&gt;)&lt;/span&gt; → present results → dsers_supplier_replace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent is already an LLM. Adding another LLM call for matching creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nested latency&lt;/strong&gt; — the outer LLM is waiting (and burning tokens) while the inner LLM processes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost multiplication&lt;/strong&gt; — orchestration tokens + matching tokens for every candidate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-deterministic chains&lt;/strong&gt; — two layers of randomness make debugging harder&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate limit risk&lt;/strong&gt; — two concurrent API consumers from the same pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The hybrid algorithmic approach eliminates all of this. It runs in the same Node.js process as the MCP server, returns in milliseconds, and produces identical results every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it's honestly weaker:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Novel terminology that's not in the synonym tables (LLM handles these better)&lt;/li&gt;
&lt;li&gt;Truly ambiguous cases where reasoning is needed (e.g., "is this a phone case color or phone model?")&lt;/li&gt;
&lt;li&gt;First encounter with a completely new product category&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How we compensate:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The three-tier confidence system (&lt;code&gt;auto&lt;/code&gt; / &lt;code&gt;review&lt;/code&gt; / &lt;code&gt;unmatched&lt;/code&gt;) routes uncertain cases to human review&lt;/li&gt;
&lt;li&gt;The agent (which is already an LLM) handles the &lt;code&gt;review&lt;/code&gt; cases — it can look at the low-confidence matches and apply reasoning&lt;/li&gt;
&lt;li&gt;Category-level scoring overrides let you tune the engine per product type&lt;/li&gt;
&lt;/ul&gt;
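&lt;p&gt;The routing itself is a simple threshold gate. A sketch with illustrative thresholds (not sku-matcher's shipped values):&lt;/p&gt;

```typescript
// Route each match into a tier by confidence score.
// Thresholds are illustrative, not sku-matcher's shipped values.
type Tier = "auto" | "review" | "unmatched";

interface MatchResult {
  storeVariant: string;
  supplierVariant: string;
  score: number; // 0..1 confidence from the matching engine
}

function routeMatch(m: MatchResult, autoMin = 0.85, reviewMin = 0.5): Tier {
  if (m.score >= autoMin) return "auto";     // apply silently
  if (m.score >= reviewMin) return "review"; // hand to the orchestrating agent
  return "unmatched";                        // needs a human
}

const matches: MatchResult[] = [
  { storeVariant: "Navy Blue / XL", supplierVariant: "Dark Blue / XL", score: 0.92 },
  { storeVariant: "Warm White", supplierVariant: "暖光", score: 0.71 },
  { storeVariant: "Color: 03", supplierVariant: "Color: C", score: 0.22 },
];

for (const m of matches) console.log(routeMatch(m)); // auto, review, unmatched
```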

&lt;p&gt;This is the key insight: &lt;strong&gt;use the right tool at the right layer.&lt;/strong&gt; Deterministic algorithms for the 70-85% of cases that follow patterns. LLM reasoning (from the orchestrating agent) for the remaining 15-30% that need judgment.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Which approach should I use for a small store (&amp;lt; 100 products)?&lt;/strong&gt;&lt;br&gt;
LLM prompting (Approach 3) is likely fine. The cost and latency are manageable at small scale, and the accuracy is highest. If you're already using an AI agent via MCP, the agent itself can handle matching through prompting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What about a large catalog (1000+ products) with frequent supplier changes?&lt;/strong&gt;&lt;br&gt;
Hybrid algorithmic (Approach 5) or embeddings (Approach 2) as a first pass, with LLM as fallback for low-confidence matches. The key constraint is cost and speed at scale — $0.03 × 1000 products × 10 candidates = $300 per scan with pure LLM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I combine approaches?&lt;/strong&gt;&lt;br&gt;
Yes, and that's what production systems do. A common stack: string rules for exact matches → embeddings for semantic recall → LLM for final verification. sku-matcher combines rules + unit parsing + image hashing in a single call, but you can layer LLM verification on top for the &lt;code&gt;review&lt;/code&gt; cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why not just use CLIP for everything?&lt;/strong&gt;&lt;br&gt;
CLIP is powerful but slow (requires GPU for production) and can't distinguish quantities, dimensions, or unit conversions. A placemat photo looks the same whether it's a 2-pack or 6-pack. You'd still need text matching for attribute comparison.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is sku-matcher open-source?&lt;/strong&gt;&lt;br&gt;
Yes. &lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;github.com/lofder/sku-matcher&lt;/a&gt; — single TypeScript file, 8 test scenarios. It's being integrated into &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product&lt;/a&gt; as the &lt;code&gt;dsers_supplier_match&lt;/code&gt; tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does this relate to DSers MCP Product?&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product&lt;/a&gt; is an open-source MCP server (12 tools, 4 prompts) for dropshipping automation. sku-matcher is being integrated as the matching engine behind three new tools: &lt;code&gt;dsers_supplier_match&lt;/code&gt;, &lt;code&gt;dsers_supplier_replace&lt;/code&gt;, and &lt;code&gt;dsers_supplier_scan&lt;/code&gt;. Available on &lt;a href="https://www.npmjs.com/package/@lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;npm&lt;/a&gt;, &lt;a href="https://smithery.ai/server/@dsersx/product-mcp" rel="noopener noreferrer"&gt;Smithery&lt;/a&gt;, &lt;a href="https://glama.ai/mcp/servers/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;Glama.ai&lt;/a&gt;, and &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;DSers MCP Product&lt;/strong&gt; (dropshipping automation via AI agents):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @lofder/dsers-mcp-product
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or use the Remote MCP endpoint (no install):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dropshipping"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://ai.silentrillmcp.com/dropshipping/mcp"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;sku-matcher&lt;/strong&gt; (variant matching engine):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/lofder/sku-matcher.git
&lt;span class="nb"&gt;cd &lt;/span&gt;sku-matcher &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;em&gt;Also in this series:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/_95a3e57463e6442feacd0/your-aliexpress-supplier-just-died-heres-how-ai-will-auto-replace-it-20h8"&gt;Your AliExpress Supplier Just Died — Here's How AI Will Auto-Replace It&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;I Built an MCP Server to Automate Dropshipping Product Imports&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/_95a3e57463e6442feacd0/dropshipping-automation-tools-compared-dsers-mcp-vs-alidropify-vs-autods-vs-aerodrop-2026-4khj"&gt;Dropshipping Automation Tools Compared: DSers MCP vs AliDropify vs AutoDS vs AeroDrop (2026)&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dropshipping</category>
      <category>ai</category>
      <category>ecommerce</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Your AliExpress Supplier Just Died — Here's How AI Will Auto-Replace It</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Tue, 07 Apr 2026 08:22:09 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/your-aliexpress-supplier-just-died-heres-how-ai-will-auto-replace-it-20h8</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/your-aliexpress-supplier-just-died-heres-how-ai-will-auto-replace-it-20h8</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; — I'm building an automated supplier replacement pipeline for dropshipping sellers. It detects dead links and stock-outs, searches AliExpress for alternatives, runs a zero-model SKU matching algorithm to remap variants, and swaps the supplier — all orchestrated by an AI agent via &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product&lt;/a&gt;. The matching engine (&lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;sku-matcher&lt;/a&gt;) is open-source, designed to run inside MCP tool calls — lightweight, no model dependency, millisecond response.&lt;/p&gt;




&lt;p&gt;Every dropshipper knows the feeling. You wake up, check your dashboard, and see it: your best-selling product's supplier page returns a 404. Or worse — it's still live, but every variant says "out of stock."&lt;/p&gt;

&lt;p&gt;Now you're stuck manually finding a new supplier, and then comes the real pain: &lt;strong&gt;remapping every single SKU variant&lt;/strong&gt;. The new supplier lists "Navy Blue" as "Dark Blue." Their sizes are in Chinese. The dimensions are in a different format. You have 40 variants. It's going to take hours.&lt;/p&gt;

&lt;p&gt;I've been building tools to automate this entire workflow — from detection to replacement — and I want to share what's coming.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dropshipping Supplier Problem
&lt;/h2&gt;

&lt;p&gt;If you run a Shopify or Wix dropshipping store powered by DSers, supplier instability is a constant risk:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AliExpress listings get taken down&lt;/strong&gt; without warning — one day the URL works, the next it 404s or redirects to a category page&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stock runs out&lt;/strong&gt; on your best-selling variants — not the whole product, just the popular sizes and colors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prices spike&lt;/strong&gt; overnight, killing your margins&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suppliers silently change SKUs&lt;/strong&gt; — the listing looks the same, but internal SKU codes shift, breaking your mapping&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shipping times change&lt;/strong&gt; when suppliers switch warehouses or lose logistics partnerships&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And then there's the remapping nightmare. You find a new supplier, open both listings side by side, and discover they live in completely different naming universes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Bad Is the AliExpress Naming Chaos, Really?
&lt;/h2&gt;

&lt;p&gt;If you've never dealt with this, let me show you what real AliExpress product listings actually look like. It's not just "Grey vs Gray" — it's way worse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Colors that aren't colors.&lt;/strong&gt; Suppliers routinely label the "Color" attribute with things that have nothing to do with color. You'll see model numbers (&lt;code&gt;K1-B Yellow&lt;/code&gt;), nonsense codes (&lt;code&gt;jiamgunshue&lt;/code&gt;, &lt;code&gt;xcvvb&lt;/code&gt;), or just bare numbers (&lt;code&gt;01&lt;/code&gt;, &lt;code&gt;02&lt;/code&gt;, &lt;code&gt;03&lt;/code&gt;). One supplier's "Color" dropdown is actually phone models. Another's is material types. The attribute name means nothing — you have to look at the images to figure out what each option actually is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Ships From" as a variant dimension.&lt;/strong&gt; Many AliExpress listings include the warehouse location as a product variant alongside color and size. So you get variants like &lt;code&gt;Red / M / China&lt;/code&gt;, &lt;code&gt;Red / M / France&lt;/code&gt;, &lt;code&gt;Red / M / US Warehouse&lt;/code&gt; — tripling your SKU count for what's effectively the same product. When you switch suppliers, the new one might ship from different warehouses, have fewer options, or not use "Ships From" at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dimensions in every format imaginable.&lt;/strong&gt; One supplier writes &lt;code&gt;100*130cm&lt;/code&gt;. Another writes &lt;code&gt;1x1.3m&lt;/code&gt;. A third writes &lt;code&gt;100×130&lt;/code&gt;. A fourth writes &lt;code&gt;W100xH130cm&lt;/code&gt;. They're all the same size, but good luck pattern-matching that by hand across 40 variants. It gets worse with composite values: &lt;code&gt;4PC-32x42cm&lt;/code&gt; means "4 pieces, each 32×42cm" — but the next supplier writes &lt;code&gt;4pcs 32*42&lt;/code&gt; or &lt;code&gt;4片-32x42厘米&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quantities that look like sizes.&lt;/strong&gt; &lt;code&gt;30x40cm1pcs&lt;/code&gt; — is that a size or a quantity? It's both. &lt;code&gt;6PC-32x42cm&lt;/code&gt; — is "6" the variant count or part of a product code? These composite formats are everywhere in home goods, placemats, wall art, and craft supplies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mixed languages within the same listing.&lt;/strong&gt; Not just Chinese vs English across suppliers — within a single listing you'll find &lt;code&gt;Color: 红色/蓝色&lt;/code&gt;, or size options labeled &lt;code&gt;S码, M码, L码&lt;/code&gt; mixed with &lt;code&gt;XL, XXL&lt;/code&gt;. Some listings freely mix French (&lt;code&gt;couleur&lt;/code&gt;), German (&lt;code&gt;größe&lt;/code&gt;), and Russian (&lt;code&gt;цвет&lt;/code&gt;) in their option names depending on the seller's region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "reset product" death spiral.&lt;/strong&gt; DSers and other tools have a "Reset Product" function to re-sync with AliExpress. Sounds helpful — except it reverts any custom attribute names you've set back to the supplier's original chaos. Your carefully renamed "Color: Red, Blue, Green" becomes "Color: 01, 02, 03" again. And sometimes the reset introduces NEW attribute slugs that duplicate your existing ones, creating ghost variants that break your store's filters.&lt;/p&gt;

&lt;p&gt;This is the reality. Not a clean mapping of &lt;code&gt;Grey → Gray&lt;/code&gt;. It's &lt;code&gt;jiamgunshue → Red&lt;/code&gt;, &lt;code&gt;03 → Navy Blue&lt;/code&gt;, &lt;code&gt;4PC-32x42cm → 4pcs 32*42&lt;/code&gt;, and a "Color" dropdown that's actually phone models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That's&lt;/strong&gt; what sku-matcher is built to handle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Automated Pipeline
&lt;/h2&gt;

&lt;p&gt;I'm building a 4-step pipeline that turns supplier replacement from hours of manual work into a one-click confirmation:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Detect — Is something broken?
&lt;/h3&gt;

&lt;p&gt;Before you can fix anything, you need to know something broke. Three signals to watch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dead links&lt;/strong&gt; — supplier page returns 404 or redirects to a category page&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stock depletion&lt;/strong&gt; — all variants hit zero, or key sizes/colors are gone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Price spikes&lt;/strong&gt; — supplier raised prices beyond your margin threshold&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the trigger. When any of these fire, the pipeline kicks in automatically.&lt;/p&gt;
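&lt;p&gt;A minimal sketch of that trigger logic (the field names like &lt;code&gt;redirectedToCategory&lt;/code&gt; are hypothetical, not the DSers API shape):&lt;/p&gt;

```typescript
// Illustrative sketch of the Step 1 trigger; field names are
// hypothetical. Any returned signal queues the product for the
// search-and-match pipeline.
interface SupplierStatus {
  httpStatus: number;            // from polling the supplier page
  redirectedToCategory: boolean; // listing gone, seller kept the URL
  variantStock: number[];        // stock per variant
  currentCost: number;
  originalCost: number;
}

function detectBrokenSupplier(s: SupplierStatus, priceSpikeThreshold = 0.2): string[] {
  const signals: string[] = [];
  if (s.httpStatus === 404 || s.redirectedToCategory) signals.push("dead_link");
  if (s.variantStock.every((q) => q === 0)) signals.push("stock_depleted");
  if ((s.currentCost - s.originalCost) / s.originalCost > priceSpikeThreshold) {
    signals.push("price_spike");
  }
  return signals; // non-empty means the pipeline kicks in
}
```

&lt;p&gt;The $18-to-$26 jump from the intro is a 44% increase, so it would trip a 20% threshold immediately.&lt;/p&gt;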

&lt;h3&gt;
  
  
  Step 2: Search — Find alternative suppliers
&lt;/h3&gt;

&lt;p&gt;This is where &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product&lt;/a&gt; does the heavy lifting. It's an open-source MCP server with 12 tools and 4 prompts for dropshipping automation. The &lt;code&gt;dsers_find_product&lt;/code&gt; tool searches the AliExpress supplier catalog by keyword or — and this is the good part — &lt;strong&gt;by image&lt;/strong&gt;. Upload the product photo, get visually similar products back. Way more reliable than keyword search for finding the exact same product from a different seller.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is DSers MCP Product?&lt;/strong&gt; It's an &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;open-source MCP server&lt;/a&gt; that connects AI agents (Claude, Cursor, Windsurf, etc.) to the DSers dropshipping platform. You can import products from AliExpress/Alibaba, set pricing rules, push to Shopify/Wix stores, manage inventory — all through natural language. It supports both local (npm/stdio) and remote (Streamable HTTP + OAuth) connections, and is listed on &lt;a href="https://www.npmjs.com/package/@lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;npm&lt;/a&gt;, &lt;a href="https://smithery.ai/server/@dsersx/product-mcp" rel="noopener noreferrer"&gt;Smithery&lt;/a&gt;, &lt;a href="https://registry.modelcontextprotocol.io/servers/io.github.lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;MCP Registry&lt;/a&gt;, and &lt;a href="https://glama.ai/mcp/servers/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;Glama.ai&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The search returns candidates with pricing, ratings, and shipping info. Filter by minimum stock, acceptable price range, and ships-from country.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Match — Map SKU variants automatically
&lt;/h3&gt;

&lt;p&gt;This is the hard part, and where I spent most of the engineering time.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;sku-matcher&lt;/a&gt; — a pure-algorithm matching engine designed specifically to run inside MCP tool calls, not as a standalone service. No LLM, no ML model, no GPU, no external API dependency. Just deterministic rules that execute in milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why zero-model? Because it's built for MCP.&lt;/strong&gt; An MCP tool call needs to return a result fast — the AI agent is waiting. You can't send 40×40 variant pairs to an LLM for every candidate supplier. The matching has to be:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Fast&lt;/strong&gt; — under 150ms for 100×100 variant pairs. An agent might evaluate 10 candidate suppliers in one conversation turn&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic&lt;/strong&gt; — same input, same output, every time. No temperature, no prompt drift&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightweight&lt;/strong&gt; — runs in the same Node.js process as the MCP server. Zero cold start, zero API cost, zero token burn&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offline-capable&lt;/strong&gt; — works without internet access after install. The synonym tables, unit conversion rules, and scoring logic are all local&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The engine handles the real-world naming chaos described above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your supplier              New supplier
────────────────────────   ──────────────────────────
Grey                    →  Gray
Navy Blue               →  Dark Blue
红色                    →  Red
jiamgunshue             →  (needs image match → Red)
03                      →  (opaque code → low confidence)
90 for 12-18M           →  90
Color                   →  颜色
Emitting Color          →  颜色
4pcs                    →  4 Pieces / 4片
100*130cm               →  1x1.3m
4PC-32x42cm             →  4pcs 32*42
W100xH200cm             →  100×200cm
Warm White              →  暖光 / 3000K
Red/M/China Mainland    →  Red/M  (Ships From filtered)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Three matching layers:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Text matching&lt;/strong&gt; — exact, normalized, synonym tables covering 5 languages (English, Chinese, French, German, Russian), substring, opaque-code detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unit-aware matching&lt;/strong&gt; — parses and converts between measurement systems. &lt;code&gt;500g&lt;/code&gt; matches &lt;code&gt;0.5kg&lt;/code&gt;. &lt;code&gt;32x42cm&lt;/code&gt; matches &lt;code&gt;320x420mm&lt;/code&gt;. Composite values like &lt;code&gt;4PC-32x42cm&lt;/code&gt; split into count(4) + dimensions and match both parts independently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image matching&lt;/strong&gt; — dHash perceptual hashing for when text fails (e.g., supplier uses "Color: 1, 2, 3" instead of actual color names)&lt;/li&gt;
&lt;/ol&gt;
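&lt;p&gt;The unit-aware layer boils down to "parse value plus unit, convert to a base unit, compare numerically". A simplified sketch with a deliberately tiny conversion table (a full implementation would also track unit families so a weight can never match a length):&lt;/p&gt;

```typescript
// Simplified sketch of layer 2 (unit-aware matching). The table is an
// illustrative subset, not the engine's full conversion rules.
const UNIT_FACTORS: Record<string, number> = {
  mg: 0.001, g: 1, kg: 1000, // weights -> grams
  mm: 0.1, cm: 1, m: 100,    // lengths -> centimeters
  ml: 1, l: 1000,            // volumes -> milliliters
};

function normalizeUnitValue(raw: string): number | null {
  // Longer unit names first so "mm" is not read as "m"
  const m = raw.toLowerCase().match(/(\d+(?:\.\d+)?)\s*(mg|kg|mm|cm|ml|g|m|l)\b/);
  return m ? parseFloat(m[1]) * UNIT_FACTORS[m[2]] : null;
}

function unitAwareMatch(a: string, b: string): boolean {
  const va = normalizeUnitValue(a);
  const vb = normalizeUnitValue(b);
  if (va === null || vb === null) return false;
  return Math.abs(va - vb) < 1e-6 * Math.max(va, vb, 1);
}
```

&lt;p&gt;Under this scheme &lt;code&gt;500g&lt;/code&gt; and &lt;code&gt;0.5kg&lt;/code&gt; both normalize to 500 grams and match, while &lt;code&gt;500g&lt;/code&gt; vs &lt;code&gt;0.6kg&lt;/code&gt; does not.&lt;/p&gt;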

&lt;p&gt;Each variant gets scored 0-100. 50 or above = auto-match. 25-49 = needs human review. Below 25 = unmatched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A real matching example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Current: "Navy Blue-XL-China Mainland"
         ↓
Step 1: Align dimensions
  Color ↔ 颜色 (synonym)
  Size ↔ 尺码 (synonym)
  Ships From → filtered (logistics, not product attribute)

Step 2: Score values
  "Navy Blue" → synonym → "Dark Blue"    score: 25
  "XL" → exact → "XL"                    score: 40

Step 3: Normalize
  avg(25, 40) / 40 × 80 = 65

Step 4: Auxiliary signals
  +5 (price within 15%) = 70

Result: 70 ≥ 50 → auto match ✓
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
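&lt;p&gt;The arithmetic in that walk-through can be expressed directly. A simplified sketch (function names are mine; the real engine has more value-match types and auxiliary signals than shown here):&lt;/p&gt;

```typescript
// The walk-through above as code. Per-value scores in this sketch:
// exact match = 40, synonym match = 25.
function variantScore(valueScores: number[], priceWithin15pct: boolean): number {
  const avg = valueScores.reduce((sum, v) => sum + v, 0) / valueScores.length;
  let score = (avg / 40) * 80;      // normalize into the 0-80 band
  if (priceWithin15pct) score += 5; // auxiliary signal
  return score;
}

function classify(score: number): "auto" | "review" | "unmatched" {
  if (score >= 50) return "auto";
  if (score >= 25) return "review";
  return "unmatched";
}
```

&lt;p&gt;Running the Navy Blue example: &lt;code&gt;variantScore([25, 40], true)&lt;/code&gt; gives 70, which &lt;code&gt;classify&lt;/code&gt; marks as an auto match.&lt;/p&gt;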



&lt;h3&gt;
  
  
  Step 4: Replace — Swap the mapping
&lt;/h3&gt;

&lt;p&gt;Once matching is done, the results look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"summary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"auto_matched"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"needs_review"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"unmatched"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"total"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;35 out of 40 variants matched automatically. 3 need a quick human check (usually opaque color codes where the algorithm isn't confident enough). 2 variants don't exist at the new supplier.&lt;/p&gt;

&lt;p&gt;The seller reviews the 3 uncertain matches, confirms or corrects them, and the mapping is applied. &lt;strong&gt;40 variants remapped in under a minute instead of hours.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How AI Agents Tie It All Together
&lt;/h2&gt;

&lt;p&gt;Here's where MCP (Model Context Protocol) makes this powerful. An AI agent in Claude, Cursor, or any MCP-compatible client can orchestrate the entire flow with a single instruction:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Check my import list for products running low on stock and find replacements."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Calls &lt;code&gt;dsers_import_list&lt;/code&gt; → identifies low-stock items&lt;/li&gt;
&lt;li&gt;Calls &lt;code&gt;dsers_find_product&lt;/code&gt; → searches alternatives by image&lt;/li&gt;
&lt;li&gt;Runs sku-matcher → maps variants with confidence scores&lt;/li&gt;
&lt;li&gt;Presents results → "35 auto-matched, 3 need your review"&lt;/li&gt;
&lt;li&gt;After confirmation → executes the swap via &lt;code&gt;dsers_supplier_replace&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The human stays in the loop for the final decision, but the tedious research and mapping work is fully automated. That's the promise of AI agents in e-commerce — not replacing human judgment, but eliminating the busywork around it.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Can sku-matcher handle products with 50+ variants?&lt;/strong&gt;&lt;br&gt;
Yes. The engine matches 100×100 variant pairs in under 150ms. It uses greedy assignment (highest score first, each variant used at most once), so cost grows predictably with the number of variant pairs rather than requiring a combinatorial search.&lt;/p&gt;
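&lt;p&gt;Greedy assignment is simple enough to sketch in a few lines (illustrative shapes and names, not the library's API):&lt;/p&gt;

```typescript
// Greedy assignment sketch: score every (current, candidate) variant
// pair, sort descending, and keep each best pair whose two variants
// are both still unassigned.
interface Pair { from: number; to: number; score: number; }

function greedyAssign(scores: number[][]): Pair[] {
  const pairs: Pair[] = [];
  scores.forEach((row, from) =>
    row.forEach((score, to) => pairs.push({ from, to, score }))
  );
  pairs.sort((a, b) => b.score - a.score); // highest score first
  const usedFrom = new Set<number>();
  const usedTo = new Set<number>();
  const result: Pair[] = [];
  for (const p of pairs) {
    if (usedFrom.has(p.from) || usedTo.has(p.to)) continue; // no reuse
    usedFrom.add(p.from);
    usedTo.add(p.to);
    result.push(p);
  }
  return result;
}
```

&lt;p&gt;Greedy can miss the globally optimal pairing that an algorithm like Hungarian assignment would find, but for variant mapping the scores are usually separated enough that the trade for speed is worth it.&lt;/p&gt;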

&lt;p&gt;&lt;strong&gt;What if the new supplier uses completely different option names?&lt;/strong&gt;&lt;br&gt;
The dimension alignment system handles this. "Color" maps to "颜色" via synonym tables. If names are completely novel, unit-family detection kicks in — if all values under "Capacity" are in ml/L, it aligns with any other option that has the same unit family.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does it work with Alibaba and 1688, not just AliExpress?&lt;/strong&gt;&lt;br&gt;
The matching engine itself is supplier-agnostic — it works on variant data structures. The DSers MCP tools (&lt;code&gt;dsers_find_product&lt;/code&gt;, &lt;code&gt;dsers_product_import&lt;/code&gt;) support AliExpress, Alibaba, and Accio.com sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is the matching engine open-source?&lt;/strong&gt;&lt;br&gt;
Yes. &lt;a href="https://github.com/lofder/sku-matcher" rel="noopener noreferrer"&gt;sku-matcher&lt;/a&gt; is open-source on GitHub. Single TypeScript file, ~1200 lines, 8 test scenarios. The DSers MCP integration will be part of &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why build it as a local algorithm instead of an LLM call?&lt;/strong&gt;&lt;br&gt;
Because it runs inside MCP tool calls. When an AI agent calls &lt;code&gt;dsers_supplier_match&lt;/code&gt;, the matching engine executes in the same Node.js process — no API roundtrip, no token cost, no latency. The agent can evaluate 10 candidate suppliers in a single conversation turn without burning through rate limits or racking up costs. Synonym tables, unit conversion, and scoring logic are all bundled locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How is this different from DSers' built-in supplier optimizer?&lt;/strong&gt;&lt;br&gt;
DSers' built-in tool suggests alternative suppliers but doesn't auto-map variants. You still remap manually. This pipeline closes the gap — from finding the replacement to applying the mapping, with variant-level confidence scoring.&lt;/p&gt;
&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm actively integrating sku-matcher into &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product&lt;/a&gt;. The planned new tools:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;dsers_supplier_match&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run the matching engine against a candidate supplier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;dsers_supplier_replace&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Execute the variant mapping swap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;dsers_supplier_scan&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Batch-scan your catalog for stock/link/price problems and auto-find replacements&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Try DSers MCP Product today:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Local (stdio):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @lofder/dsers-mcp-product
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remote MCP (no install, just add the URL to your MCP client):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dropshipping"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://ai.silentrillmcp.com/dropshipping/mcp"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Available on: &lt;a href="https://www.npmjs.com/package/@lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;npm&lt;/a&gt; · &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; · &lt;a href="https://smithery.ai/server/@dsersx/product-mcp" rel="noopener noreferrer"&gt;Smithery&lt;/a&gt; · &lt;a href="https://registry.modelcontextprotocol.io/servers/io.github.lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;MCP Registry&lt;/a&gt; · &lt;a href="https://glama.ai/mcp/servers/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;Glama.ai&lt;/a&gt; · &lt;a href="https://apify.com/lofder/dropshipping-mcp-dsers" rel="noopener noreferrer"&gt;Apify&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explore the matching engine:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/lofder/sku-matcher.git
&lt;span class="nb"&gt;cd &lt;/span&gt;sku-matcher &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're building dropshipping automation or dealing with SKU mapping problems, I'd love to hear your use cases. Drop a comment or open an issue on GitHub.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Also in this series:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/_95a3e57463e6442feacd0/how-to-automate-aliexpress-to-shopify-product-import-with-ai-step-by-step-guide-3f5a"&gt;How to Automate AliExpress to Shopify Product Import with AI — Step-by-Step Guide&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product on GitHub&lt;/a&gt; — 12 tools + 4 prompts for dropshipping automation&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dropshipping</category>
      <category>ai</category>
      <category>ecommerce</category>
      <category>automation</category>
    </item>
    <item>
      <title>I Convinced DSers to Add OAuth 2.1 — Dropshipping MCP Server v1.4.0</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Fri, 03 Apr 2026 16:34:14 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/i-convinced-dsers-to-add-oauth-21-dropshipping-mcp-server-v140-1jm8</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/i-convinced-dsers-to-add-oauth-21-dropshipping-mcp-server-v140-1jm8</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; DSers MCP Product v1.4.0 — convinced DSers to add official OAuth 2.1, replaced 600 lines of browser hacking with 200 lines of clean auth. Added 3 new tools (browse imports, search suppliers, view store products). Shipped a hosted remote server at &lt;code&gt;ai.silentrillmcp.com&lt;/code&gt;. 12 tools, 298 tests, open source. &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Build This?
&lt;/h2&gt;

&lt;p&gt;If you're doing dropshipping with DSers and Shopify, you know the routine — find a product on AliExpress, open DSers, click import, manually edit the title, set the markup, push to store, repeat fifty times. It's not hard, it's just slow.&lt;/p&gt;

&lt;p&gt;I built this MCP server so I could tell my AI agent "import these 10 products, mark them up 2.5x, clean up the titles, push to my store" and go do something else. It handles AliExpress, Alibaba, and Accio.com product links. Supports Shopify and Wix through DSers.&lt;/p&gt;

&lt;p&gt;The tool is free, open source, and works with Claude Desktop, Cursor, Windsurf, or any MCP-compatible client. You can run it locally via npm or connect to the hosted version without installing anything.&lt;/p&gt;

&lt;p&gt;For anyone evaluating dropshipping automation tools — this isn't a SaaS with a monthly fee. It's an MCP server you run yourself (or use hosted for free). No vendor lock-in, no subscription tiers. The code is on GitHub, MIT licensed.&lt;/p&gt;

&lt;p&gt;I've been building an open-source MCP server that automates dropshipping product imports — AliExpress to Shopify, through DSers. If you missed the earlier posts, the short version: paste a product link, tell your AI agent what markup to apply, and it handles the rest.&lt;/p&gt;

&lt;p&gt;But authentication has been my biggest headache since day one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Authentication Problem
&lt;/h2&gt;

&lt;p&gt;When I first built this tool, DSers didn't have an OAuth flow designed for third-party integrations. So I went with what I could — automating a browser login via Chrome DevTools Protocol to capture the session.&lt;/p&gt;

&lt;p&gt;It got the job done, but it wasn't pretty. I had to handle Chrome on Mac, Edge on Windows, Safari as a fallback, and a terminal prompt for headless servers. Four different login strategies, ~600 lines of browser automation code, and the whole thing depended on having a Chromium browser installed.&lt;/p&gt;

&lt;p&gt;And sessions expired every 6 hours. Users would be halfway through a bulk import, and suddenly — "session expired, please run login again."&lt;/p&gt;

&lt;p&gt;I knew there had to be a better way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting DSers on Board
&lt;/h2&gt;

&lt;p&gt;I reached out to the DSers team about building OAuth support for MCP integrations. They were receptive — the MCP ecosystem is growing fast, and they could see the value in letting AI tools connect properly instead of relying on browser session workarounds.&lt;/p&gt;

&lt;p&gt;We went through a few rounds of designing the scope model, endpoint structure, and token lifecycle. I built a proof-of-concept proxy to validate the flow before committing to a full integration. There were the usual bumps — some gateway routes needed configuring, a few scope-to-endpoint mappings to sort out — but the DSers team was responsive and we got it all working.&lt;/p&gt;

&lt;p&gt;Credit where it's due: they built a solid OAuth 2.1 authorization server with PKCE, dynamic client registration, and refresh tokens. Proper stuff.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Login Looks Like Now
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @lofder/dsers-mcp-product login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Browser opens. You click "Authorize" on DSers's own page. Done.&lt;/p&gt;

&lt;p&gt;Behind the scenes: PKCE code challenge, authorization code exchange, access token + refresh token saved to an encrypted local file. When the access token expires (every 2 hours), the refresh token renews it silently. No user interaction needed.&lt;/p&gt;
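&lt;p&gt;The PKCE piece is standard and worth showing. A minimal sketch using Node's built-in &lt;code&gt;crypto&lt;/code&gt; module (the DSers endpoint URLs and the token exchange itself are omitted):&lt;/p&gt;

```typescript
import { createHash, randomBytes } from "node:crypto";

// Minimal PKCE sketch. The verifier never leaves the machine; only
// its SHA-256 hash (the challenge) goes into the authorization URL,
// and the later token request proves possession of the verifier.
function generatePkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString("base64url"); // 43 URL-safe chars
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```

&lt;p&gt;Even if the authorization code is intercepted in transit, it's useless without the locally held verifier; that's what makes PKCE safe for a CLI client that can't keep a secret.&lt;/p&gt;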

&lt;p&gt;I deleted all 600 lines of CDP code. The new OAuth module is about 200 lines. And it just works — no Chrome dependency, no cookie extraction, no platform-specific browser detection.&lt;/p&gt;

&lt;p&gt;The token file format is shared with DSClaw (our web app). If a user authorizes through DSClaw, the MCP server picks up the same token automatically. One login for everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Remote Server Is Live
&lt;/h2&gt;

&lt;p&gt;With OAuth working, I could finally ship a hosted MCP server that doesn't require any local installation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dropshipping"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://ai.silentrillmcp.com/dropshipping/mcp"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add that to your MCP client config and you're connected. No &lt;code&gt;npx&lt;/code&gt;, no Node.js, nothing to install. The server runs on Vercel, authenticates via the OAuth Bearer token, and isolates each user's data by extracting the &lt;code&gt;sub&lt;/code&gt; claim from the JWT.&lt;/p&gt;

&lt;p&gt;Getting this to work on Vercel had its own challenges — the &lt;code&gt;mcp-handler&lt;/code&gt; library initializes the MCP server before the request context is available, so the token wasn't ready when tools were being registered. Took a few iterations to land on the right pattern: create a fresh handler per request with the token baked in, cache only the job store across requests for the same user.&lt;/p&gt;
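&lt;p&gt;Stripped of the &lt;code&gt;mcp-handler&lt;/code&gt; specifics, the pattern looks roughly like this (a sketch with hypothetical names, not the actual library API):&lt;/p&gt;

```typescript
// Rough shape of the per-request pattern: build a fresh handler per
// request with the bearer token available up front, and cache only
// the per-user job store, keyed by the JWT's `sub` claim.
const jobStores = new Map<string, Map<string, unknown>>();

function handlerForRequest(token: string, sub: string) {
  let store = jobStores.get(sub);
  if (!store) {
    store = new Map<string, unknown>();
    jobStores.set(sub, store); // reused across requests for this user
  }
  // Everything else is rebuilt fresh, so the token exists before any
  // tool registration happens.
  return {
    authHeader: `Bearer ${token}`,
    user: sub,
    store,
  };
}
```

&lt;p&gt;Rebuilding the handler per request costs a little, but it guarantees no tool ever runs with a stale or missing token, and keying the cached store by &lt;code&gt;sub&lt;/code&gt; keeps users isolated from each other.&lt;/p&gt;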

&lt;h2&gt;
  
  
  New Tools for Browsing and Sourcing
&lt;/h2&gt;

&lt;p&gt;The original 9 tools covered the import-to-push pipeline. But there was an obvious gap: "what's already in my import list?" was unanswerable without opening the DSers website.&lt;/p&gt;

&lt;p&gt;v1.4.0 adds three new tools:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;dsers_import_list&lt;/code&gt;&lt;/strong&gt; pulls your staging list with enriched data — cost ranges, sell prices, markup status, and low stock warnings. Each item gets a separate API call for variant-level detail, running in parallel.&lt;/p&gt;
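&lt;p&gt;That parallel enrichment is a straightforward &lt;code&gt;Promise.all&lt;/code&gt; fan-out. A sketch (&lt;code&gt;fetchItemDetail&lt;/code&gt; and the low-stock threshold are hypothetical stand-ins, not the real API):&lt;/p&gt;

```typescript
// Sketch of the parallel per-item enrichment behind dsers_import_list.
async function enrichImportList(
  ids: string[],
  fetchItemDetail: (id: string) => Promise<{ cost: number; stock: number }>
) {
  // One detail call per item, all in flight at once
  const details = await Promise.all(ids.map((id) => fetchItemDetail(id)));
  return ids.map((id, i) => ({
    id,
    cost: details[i].cost,
    stock: details[i].stock,
    lowStock: details[i].stock < 10, // illustrative warning threshold
  }));
}
```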

&lt;p&gt;&lt;strong&gt;&lt;code&gt;dsers_my_products&lt;/code&gt;&lt;/strong&gt; shows what's already been pushed to your Shopify or Wix store, with supplier links for re-importing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;dsers_find_product&lt;/code&gt;&lt;/strong&gt; searches the DSers product pool by keyword or image. Each result includes an &lt;code&gt;import_url&lt;/code&gt; you can feed directly into &lt;code&gt;dsers_product_import&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The workflow is now: search → import → edit → push. All from your AI client.&lt;/p&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The codebase got a full restructure. &lt;code&gt;provider.ts&lt;/code&gt; was 1,700 lines — one file handling everything. A bad commit in v1.3.5 accidentally reverted 6 different bug fixes because it was all tangled together.&lt;/p&gt;

&lt;p&gt;Split into 6 modules under &lt;code&gt;provider/&lt;/code&gt; and 7 under &lt;code&gt;service/&lt;/code&gt;. Added ESLint + Prettier. Tests went from 195 to 298.&lt;/p&gt;

&lt;p&gt;Other fixes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;tags_add&lt;/code&gt; was validated but never actually written to the API — one line was hardcoded to &lt;code&gt;null&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;compare_at_price&lt;/code&gt; inversions (compare-at price at or below the sell price) are now auto-cleared instead of just warned about&lt;/li&gt;
&lt;li&gt;Replaced &lt;code&gt;execSync&lt;/code&gt; with &lt;code&gt;spawnSync&lt;/code&gt; for browser launch to prevent command injection&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;More product pool search filters (categories, price ranges, ship-from)&lt;/li&gt;
&lt;li&gt;A few DSers OAuth scope rules still being configured&lt;/li&gt;
&lt;li&gt;Order tracking tools (the Python version already has them)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open source: &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;github.com/lofder/dsers-mcp-product&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install: &lt;code&gt;npx @lofder/dsers-mcp-product&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Hosted: &lt;code&gt;https://ai.silentrillmcp.com/dropshipping/mcp&lt;/code&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>mcp</category>
      <category>dropshipping</category>
      <category>ai</category>
      <category>shopify</category>
    </item>
    <item>
      <title>Dropshipping Automation Tools Compared: DSers MCP vs AliDropify vs AutoDS vs AeroDrop (2026)</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Thu, 02 Apr 2026 07:33:48 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/dropshipping-automation-tools-compared-dsers-mcp-vs-alidropify-vs-autods-vs-aerodrop-2026-4khj</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/dropshipping-automation-tools-compared-dsers-mcp-vs-alidropify-vs-autods-vs-aerodrop-2026-4khj</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; There are two fundamentally different approaches to dropshipping automation in 2026: traditional web-based tools (AliDropify, AutoDS, AeroDrop) that give you a dashboard with buttons to click, and AI-native tools (&lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;DSers MCP Product&lt;/a&gt;) that let you automate workflows through conversation with an AI agent. This post breaks down the features, pricing, and trade-offs of each approach.&lt;/p&gt;




&lt;p&gt;If you're running a dropshipping store in 2026, you've probably Googled "best dropshipping automation tool" at least once this week. The options are overwhelming — every tool claims to automate everything, save you hours, and practically run your store for you.&lt;/p&gt;

&lt;p&gt;I've used several of these tools over the past couple of years. Some are genuinely good. Some are expensive for what they do. And one category is completely new — AI-native tools that don't have a dashboard at all.&lt;/p&gt;

&lt;p&gt;Here's an honest comparison of the four tools I've actually used or evaluated in depth.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contenders
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Starting Price&lt;/th&gt;
&lt;th&gt;Product Sources&lt;/th&gt;
&lt;th&gt;Platforms&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;DSers MCP Product&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AI-native (MCP server)&lt;/td&gt;
&lt;td&gt;Free (open-source)&lt;/td&gt;
&lt;td&gt;AliExpress, Alibaba, 1688, Accio&lt;/td&gt;
&lt;td&gt;Shopify, Wix&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AliDropify&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Web dashboard + Chrome extension&lt;/td&gt;
&lt;td&gt;$39.99/mo&lt;/td&gt;
&lt;td&gt;AliExpress, Alibaba, Temu, Shein&lt;/td&gt;
&lt;td&gt;Shopify&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AutoDS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Web dashboard + automation suite&lt;/td&gt;
&lt;td&gt;$26.90/mo&lt;/td&gt;
&lt;td&gt;40+ suppliers&lt;/td&gt;
&lt;td&gt;Shopify, eBay, Amazon, Wix&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AeroDrop&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Shopify app&lt;/td&gt;
&lt;td&gt;Free (limited) / $18/mo&lt;/td&gt;
&lt;td&gt;AliExpress&lt;/td&gt;
&lt;td&gt;Shopify&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How They Work (The Fundamental Difference)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Traditional tools: You click buttons
&lt;/h3&gt;

&lt;p&gt;AliDropify, AutoDS, and AeroDrop all follow the same model: you open their website or Shopify app, browse products, click "import," adjust settings through their UI, and click "push to store."&lt;/p&gt;

&lt;p&gt;This works. It's visual, it's intuitive, and most dropshippers are comfortable with it. But you're still doing the work manually — just in a more efficient interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI-native: You describe what you want
&lt;/h3&gt;

&lt;p&gt;DSers MCP Product takes a completely different approach. There's no dashboard. Instead, you connect it to an AI client (Cursor, Claude Desktop) and tell your AI agent what to do in plain English:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Import this product from AliExpress, mark up 2.5x, rewrite the title for SEO, and push to my US store as a draft."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The AI agent calls the right tools in the right order, shows you previews, asks for confirmation, and executes. You're describing outcomes instead of clicking through steps.&lt;/p&gt;
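&lt;p&gt;To make "describing outcomes" concrete, here is a rough sketch of the ordered tool-call plan an agent might derive from that one sentence. All tool names here are invented for illustration, not the server's actual API:&lt;/p&gt;

```python
# Hypothetical sketch: one natural-language request becomes an ordered tool-call
# plan. Tool names are illustrative; the real server's API may differ.

def plan_import(url: str, markup: float, store: str) -> list[dict]:
    """Translate a plain-English request into an ordered list of tool calls."""
    return [
        {"tool": "store.discover", "args": {}},                   # find connected stores
        {"tool": "product.import", "args": {"url": url}},         # pull product into workspace
        {"tool": "product.preview", "args": {"markup": markup}},  # show title/price/variants
        {"tool": "product.push", "args": {"store": store, "status": "draft"}},
    ]

plan = plan_import("https://www.aliexpress.com/item/123.html", 2.5, "US store")
print([step["tool"] for step in plan])
# ['store.discover', 'product.import', 'product.preview', 'product.push']
```

&lt;p&gt;The key point: the ordering and the confirmation pauses come from the agent, not from you clicking through screens.&lt;/p&gt;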

&lt;h2&gt;
  
  
  Feature Comparison
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Product Import
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;DSers MCP&lt;/th&gt;
&lt;th&gt;AliDropify&lt;/th&gt;
&lt;th&gt;AutoDS&lt;/th&gt;
&lt;th&gt;AeroDrop&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;One-click import&lt;/td&gt;
&lt;td&gt;Via conversation&lt;/td&gt;
&lt;td&gt;Chrome extension&lt;/td&gt;
&lt;td&gt;Dashboard&lt;/td&gt;
&lt;td&gt;Shopify app&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bulk import&lt;/td&gt;
&lt;td&gt;Yes (unlimited URLs)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes (plan limits)&lt;/td&gt;
&lt;td&gt;Yes (plan limits)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AliExpress&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alibaba&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1688&lt;/td&gt;
&lt;td&gt;Yes (with DSers auth)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Temu / Shein&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; AutoDS wins on supplier breadth (40+ sources). DSers MCP wins on depth for AliExpress/Alibaba workflows. AliDropify covers the most consumer platforms (Temu, Shein).&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing Rules
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;DSers MCP&lt;/th&gt;
&lt;th&gt;AliDropify&lt;/th&gt;
&lt;th&gt;AutoDS&lt;/th&gt;
&lt;th&gt;AeroDrop&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Markup multiplier&lt;/td&gt;
&lt;td&gt;Yes (e.g. 2.5x)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fixed markup&lt;/td&gt;
&lt;td&gt;Yes (e.g. +$5)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Compare-at / sale price&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-variant pricing&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pricing conflict detection&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; DSers MCP and AutoDS are the most flexible. DSers MCP's pricing conflict detection is unique — it blocks pushes when your MCP rules conflict with DSers store-level pricing rules, preventing silent price overrides.&lt;/p&gt;
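&lt;p&gt;The check itself is conceptually simple. A minimal sketch of the assumed logic (the real implementation may structure this differently):&lt;/p&gt;

```python
# Assumed conflict-detection logic: if the DSers store has its own pricing rule
# enabled AND the MCP request also sets pricing, block the push rather than let
# one silently override the other.

def check_pricing_conflict(store_rule_enabled: bool, mcp_pricing_set: bool) -> str:
    if store_rule_enabled and mcp_pricing_set:
        return "blocked"  # user must pick one pricing source first
    return "ok"

print(check_pricing_conflict(store_rule_enabled=True, mcp_pricing_set=True))   # blocked
print(check_pricing_conflict(store_rule_enabled=True, mcp_pricing_set=False))  # ok
```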

&lt;h3&gt;
  
  
  Store Push
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;DSers MCP&lt;/th&gt;
&lt;th&gt;AliDropify&lt;/th&gt;
&lt;th&gt;AutoDS&lt;/th&gt;
&lt;th&gt;AeroDrop&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Shopify&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wix&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WooCommerce&lt;/td&gt;
&lt;td&gt;Via DSers&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;eBay&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-store push&lt;/td&gt;
&lt;td&gt;Yes (one command)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pre-push safety checks&lt;/td&gt;
&lt;td&gt;Yes (auto-blocks below-cost)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; AutoDS has the most platform coverage. DSers MCP has the best safety nets — it automatically blocks zero-price, below-cost, and zero-stock pushes.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI &amp;amp; SEO
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;DSers MCP&lt;/th&gt;
&lt;th&gt;AliDropify&lt;/th&gt;
&lt;th&gt;AutoDS&lt;/th&gt;
&lt;th&gt;AeroDrop&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AI title rewriting&lt;/td&gt;
&lt;td&gt;Yes (via AI agent)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (AI feature)&lt;/td&gt;
&lt;td&gt;Yes (Premium)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI description rewriting&lt;/td&gt;
&lt;td&gt;Yes (via AI agent)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (AI feature)&lt;/td&gt;
&lt;td&gt;Yes (Premium)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Title cleanup&lt;/td&gt;
&lt;td&gt;Yes (built-in)&lt;/td&gt;
&lt;td&gt;Basic&lt;/td&gt;
&lt;td&gt;Basic&lt;/td&gt;
&lt;td&gt;Basic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Custom content rules&lt;/td&gt;
&lt;td&gt;Yes (prefix, suffix, keep_first_n images)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; DSers MCP and AutoDS both offer AI content features. The difference is that DSers MCP uses your own AI client (Claude, GPT), so the quality depends on the model you're using — and you can give it specific instructions like "write this title for the US hiking market." AutoDS uses its built-in AI, which is more limited but requires no setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing Breakdown
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;DSers MCP&lt;/th&gt;
&lt;th&gt;AliDropify&lt;/th&gt;
&lt;th&gt;AutoDS&lt;/th&gt;
&lt;th&gt;AeroDrop&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Free tier&lt;/td&gt;
&lt;td&gt;Yes (fully free, open-source)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (500 products, 200 orders)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Entry paid&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;$39.99/mo&lt;/td&gt;
&lt;td&gt;$26.90/mo&lt;/td&gt;
&lt;td&gt;$18/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mid tier&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;$99.99/mo&lt;/td&gt;
&lt;td&gt;$66.90/mo&lt;/td&gt;
&lt;td&gt;$48/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-order fees&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;$0.30-0.50/order (auto-fulfillment)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;DSers MCP is completely free.&lt;/strong&gt; MIT license, no subscription tiers, no per-order fees, no usage limits. The only requirement is a DSers account (which also has a free plan supporting 3,000 products and 3 stores).&lt;/p&gt;

&lt;p&gt;AeroDrop's free tier is decent for testing but caps at 200 orders/month. AliDropify is the most expensive entry point at $39.99/mo. AutoDS has hidden costs — the auto-fulfillment credits ($0.30-0.50 per order) add up fast on high-volume stores.&lt;/p&gt;
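&lt;p&gt;To put numbers on that, a quick back-of-the-envelope calculation using the prices from the table above:&lt;/p&gt;

```python
# Monthly AutoDS cost at a given order volume: base subscription plus the
# $0.30-0.50 per-order auto-fulfillment credit range (figures from the table).

def autods_monthly(orders: int, base: float = 26.90) -> tuple[float, float]:
    return (round(base + orders * 0.30, 2), round(base + orders * 0.50, 2))

low, high = autods_monthly(500)
print(low, high)  # at 500 orders/month the per-order fees dwarf the subscription itself
```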

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;DSers MCP&lt;/th&gt;
&lt;th&gt;AliDropify&lt;/th&gt;
&lt;th&gt;AutoDS&lt;/th&gt;
&lt;th&gt;AeroDrop&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Password in config file&lt;/td&gt;
&lt;td&gt;No (zero-password login)&lt;/td&gt;
&lt;td&gt;N/A (web login)&lt;/td&gt;
&lt;td&gt;N/A (web login)&lt;/td&gt;
&lt;td&gt;N/A (Shopify OAuth)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open-source (auditable)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security scan score&lt;/td&gt;
&lt;td&gt;92/100 (SafeSkill)&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;DSers MCP is the only tool in this list where you can read every line of code that touches your data. Authentication uses browser-based login — your password never touches the tool. I wrote a &lt;a href="https://dev.to/_95a3e57463e6442feacd0/your-mcp-server-shouldnt-need-your-password-3od9"&gt;detailed post about the security model&lt;/a&gt; if you're interested.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Use What
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use DSers MCP Product if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You already use Cursor, Claude Desktop, or another MCP client&lt;/li&gt;
&lt;li&gt;You want full automation through conversation, not clicking&lt;/li&gt;
&lt;li&gt;You want a free, open-source tool with no usage limits&lt;/li&gt;
&lt;li&gt;You care about code transparency and security&lt;/li&gt;
&lt;li&gt;Your main sources are AliExpress and Alibaba&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use AutoDS if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You sell on eBay or Amazon (not just Shopify)&lt;/li&gt;
&lt;li&gt;You need 40+ supplier sources&lt;/li&gt;
&lt;li&gt;You want a fully managed solution and don't mind paying&lt;/li&gt;
&lt;li&gt;You don't use AI coding tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use AliDropify if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You import from Temu or Shein (not just AliExpress)&lt;/li&gt;
&lt;li&gt;You prefer a Chrome extension workflow&lt;/li&gt;
&lt;li&gt;You're on Shopify only&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use AeroDrop if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're just starting out and want a free Shopify app&lt;/li&gt;
&lt;li&gt;Your volume is under 200 orders/month&lt;/li&gt;
&lt;li&gt;You want the simplest possible setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;These tools aren't really competing with each other — they represent two different eras of e-commerce tooling.&lt;/p&gt;

&lt;p&gt;Traditional tools (AliDropify, AutoDS, AeroDrop) are mature, feature-rich, and designed for people who want a visual dashboard. They're better at order fulfillment, inventory sync, and platform breadth.&lt;/p&gt;

&lt;p&gt;AI-native tools (DSers MCP Product) are new, focused on the import-to-push workflow, and designed for people who are already using AI assistants daily. They're better at flexible automation, custom rules, and conversational workflows.&lt;/p&gt;

&lt;p&gt;My prediction: within a year, every major dropshipping tool will have an MCP server or similar AI integration. The tools that figure this out first will have a significant advantage.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DSers MCP Product: &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;github.com/lofder/dsers-mcp-product&lt;/a&gt; (free, open-source)&lt;/li&gt;
&lt;li&gt;Tutorial: &lt;a href="https://dev.to/_95a3e57463e6442feacd0/how-to-automate-aliexpress-to-shopify-product-import-with-ai-step-by-step-guide-3f5a"&gt;How to Automate AliExpress to Shopify with AI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Technical deep-dive: &lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;I Built an MCP Server to Automate Dropshipping Product Imports&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dropshipping</category>
      <category>ai</category>
      <category>shopify</category>
      <category>ecommerce</category>
    </item>
    <item>
      <title>How to Automate AliExpress to Shopify Product Import with AI (Step-by-Step Guide)</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Wed, 01 Apr 2026 10:17:16 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/how-to-automate-aliexpress-to-shopify-product-import-with-ai-step-by-step-guide-3f5a</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/how-to-automate-aliexpress-to-shopify-product-import-with-ai-step-by-step-guide-3f5a</guid>
      <description>&lt;p&gt;If you run a dropshipping store, you already know the pain: find a product on AliExpress, copy the title, download the images, clean up the description, set your margins, push it to Shopify. Do that 20 times and your evening is gone.&lt;/p&gt;

&lt;p&gt;I got tired of doing this manually, so I built a tool that lets an AI agent handle the whole thing. You type one sentence — "import this product, mark it up 2.5x, push to my store" — and it just happens. No scripts, no browser extensions, no Zapier glue.&lt;/p&gt;

&lt;p&gt;This guide walks you through setting it up. It takes about 5 minutes, and you don't need to write any code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; You can automate AliExpress-to-Shopify product imports using a free, open-source tool called &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt;. Install with &lt;code&gt;npx @lofder/dsers-mcp-product login&lt;/code&gt;, add a 3-line JSON config to Cursor or Claude Desktop, and start importing products with plain English commands like "import this product, mark up 2.5x, push to my store." No coding needed, no passwords in config files.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Need
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;&lt;a href="https://www.dsers.com/" rel="noopener noreferrer"&gt;DSers&lt;/a&gt; account&lt;/strong&gt; (free plan works) with at least one Shopify or Wix store connected&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://cursor.sh/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;&lt;/strong&gt; or &lt;strong&gt;&lt;a href="https://claude.ai/desktop" rel="noopener noreferrer"&gt;Claude Desktop&lt;/a&gt;&lt;/strong&gt; — these are AI coding assistants that support MCP (Model Context Protocol), the open standard this tool uses to talk to AI agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js 22 or later&lt;/strong&gt; — if you don't have it, grab it from &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;nodejs.org&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What you don't need: coding experience, API keys, or any paid subscription beyond what you already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Log In (One Time, 30 Seconds)
&lt;/h2&gt;

&lt;p&gt;Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @lofder/dsers-mcp-product login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your browser opens to the official DSers login page. Log in the way you normally do. That's it — the tool picks up your session automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your password never touches this tool.&lt;/strong&gt; You're logging in on DSers's own website. The MCP server only receives a session token, which is encrypted and stored locally on your machine. No passwords in any config file, ever.&lt;/p&gt;

&lt;p&gt;Sessions last about 6 hours. When one expires, your AI agent will tell you and ask you to run &lt;code&gt;login&lt;/code&gt; again. Takes 10 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Connect to Your AI Client
&lt;/h2&gt;

&lt;h3&gt;
  
  
  If you use Cursor
&lt;/h3&gt;

&lt;p&gt;Add this to your &lt;code&gt;.cursor/mcp.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@lofder/dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  If you use Claude Desktop
&lt;/h3&gt;

&lt;p&gt;Add to your &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@lofder/dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart your client. You should see "dsers-mcp-product" appear in the connected tools list. That's the setup — everything else happens in conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Import Your First Product
&lt;/h2&gt;

&lt;p&gt;Open a chat with your AI agent and type something like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Import this product and push it to my Shopify store as a draft: &lt;a href="https://www.aliexpress.com/item/1005006372921430.html" rel="noopener noreferrer"&gt;https://www.aliexpress.com/item/1005006372921430.html&lt;/a&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Behind the scenes, the agent will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Discover&lt;/strong&gt; your connected stores (which Shopify/Wix shops you have)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Import&lt;/strong&gt; the product from AliExpress into your DSers workspace&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preview&lt;/strong&gt; the result — showing you the title, price, variants, and images&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Push&lt;/strong&gt; it to your store as a draft listing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You'll see the preview before anything gets pushed. The agent asks for your confirmation at each step, so nothing happens without your say-so.&lt;/p&gt;

&lt;p&gt;Works with product links from &lt;strong&gt;AliExpress&lt;/strong&gt;, &lt;strong&gt;Alibaba.com&lt;/strong&gt;, and &lt;strong&gt;&lt;a href="https://www.accio.com/" rel="noopener noreferrer"&gt;Accio.com&lt;/a&gt;&lt;/strong&gt; (Alibaba's AI sourcing tool). Just paste the URL and the agent figures out the rest.&lt;/p&gt;
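&lt;p&gt;For the curious: detecting the supplier from a pasted URL can be as simple as a hostname check. This is a hypothetical sketch, not the server's actual detection logic:&lt;/p&gt;

```python
# Hypothetical source detection from a pasted product URL (illustrative only).
from urllib.parse import urlparse

def detect_source(url: str) -> str:
    host = urlparse(url).hostname or ""
    if "aliexpress." in host:  # also matches regional domains like aliexpress.us
        return "aliexpress"
    if host.endswith("alibaba.com"):
        return "alibaba"
    if host.endswith("accio.com"):
        return "accio"
    return "unsupported"

print(detect_source("https://www.aliexpress.com/item/1005006372921430.html"))  # aliexpress
```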

&lt;h2&gt;
  
  
  Step 4: Bulk Import with Pricing Rules
&lt;/h2&gt;

&lt;p&gt;One product is nice. But the real power is batching. Try this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Import these 5 products, mark up the price by 2.5x, and push them all as drafts: [URL1] [URL2] [URL3] [URL4] [URL5]"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent imports all five, applies a 2.5x markup to every variant, and pushes them in one go.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing options you can use
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multiplier&lt;/strong&gt;: "mark up 2.5x" — sell price = supplier cost × 2.5&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixed markup&lt;/strong&gt;: "add $5 to each variant" — sell price = supplier cost + $5&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compare-at price&lt;/strong&gt;: "set compare-at to 3x supplier cost" — shows a crossed-out "original" price on your store&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Combination&lt;/strong&gt;: "2x markup with compare-at at 3x" — both at once&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also mix it into natural language:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Import this product. Set the sell price to 2x supplier cost, add a compare-at price at $29.99, and only keep the first 5 images."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent understands all of these and translates them into the right tool parameters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Push to Multiple Stores
&lt;/h2&gt;

&lt;p&gt;If you run stores in different markets, this saves a lot of time:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Push this product to all my connected stores"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent calls &lt;code&gt;store.discover&lt;/code&gt; to find every Shopify and Wix store linked to your DSers account, then pushes to each one. You can also be selective:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Push to my US store with 2.5x markup and my EU store with 3x markup"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Per-store pricing, per-store visibility — one conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: SEO Before You Push
&lt;/h2&gt;

&lt;p&gt;AliExpress product titles are usually keyword-stuffed garbage. Before pushing, you can ask the AI to clean things up:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Import this product, rewrite the title and description for SEO, then push to my store"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import the product&lt;/li&gt;
&lt;li&gt;Generate a clean, search-friendly title and description&lt;/li&gt;
&lt;li&gt;Show you the rewritten version for approval&lt;/li&gt;
&lt;li&gt;Push after you confirm&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You stay in control — it won't publish anything without showing you first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Safety Nets Built In
&lt;/h2&gt;

&lt;p&gt;One thing that kept me up at night with manual imports was accidentally pushing a product with wrong pricing. The tool has automatic safety checks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hard blocks&lt;/strong&gt; (push won't go through):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Selling below supplier cost&lt;/li&gt;
&lt;li&gt;Zero sell price on any variant&lt;/li&gt;
&lt;li&gt;All variants out of stock&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Warnings&lt;/strong&gt; (push goes through, but you get a heads-up):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Profit margin below 10%&lt;/li&gt;
&lt;li&gt;Stock under 5 units&lt;/li&gt;
&lt;li&gt;Sell price under $1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pricing rule conflict detection&lt;/strong&gt;: If your DSers store already has its own pricing rule enabled (basic/standard/advanced), and you also set pricing through the MCP tool, the push gets blocked — not just warned. The agent shows you two options: accept the store's pricing rule, or disable it in DSers settings to use your MCP pricing instead. No more silent price overrides.&lt;/p&gt;

&lt;p&gt;If you're 100% sure about an edge case, you can tell the agent to &lt;code&gt;force_push&lt;/code&gt;. But the default is to protect you from mistakes.&lt;/p&gt;
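&lt;p&gt;In code terms, the checks boil down to something like the sketch below. Thresholds come from the lists above; the actual implementation is TypeScript and is likely structured differently:&lt;/p&gt;

```python
# Sketch of the pre-push safety checks (thresholds from the post, logic assumed).
# Comparisons are written as "limit > value" throughout for consistency.

def validate_push(variants: list[dict], force_push: bool = False) -> dict:
    blocks, warnings = [], []
    for v in variants:
        if v["sell"] == 0:
            blocks.append("zero sell price")
        elif v["cost"] > v["sell"]:
            blocks.append("selling below supplier cost")
        else:
            margin = (v["sell"] - v["cost"]) / v["sell"]
            if 0.10 > margin:
                warnings.append("margin below 10%")
        if 1 > v["sell"]:
            warnings.append("sell price under $1")
        if 5 > v["stock"]:
            warnings.append("stock under 5 units")
    if all(v["stock"] == 0 for v in variants):
        blocks.append("all variants out of stock")
    return {"ok": force_push or not blocks, "blocks": blocks, "warnings": warnings}

print(validate_push([{"sell": 9.0, "cost": 10.0, "stock": 3}]))
# {'ok': False, 'blocks': ['selling below supplier cost'], 'warnings': ['stock under 5 units']}
```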

&lt;h2&gt;
  
  
  What's Under the Hood
&lt;/h2&gt;

&lt;p&gt;The tool is called &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt; — it's open source, built in TypeScript, and uses 9 specialized tools and 4 workflow prompts to handle the full import-to-push pipeline. It's published on npm, the official MCP Registry, Smithery, Glama (AAA rated), and several other directories.&lt;/p&gt;

&lt;p&gt;If you're curious about the architecture and the decisions behind the tool design, I wrote a separate technical post: &lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;I Built an MCP Server to Automate Dropshipping Product Imports&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Reference
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Install:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @lofder/dsers-mcp-product login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Config (Cursor / Claude Desktop):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@lofder/dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Common commands:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What you want&lt;/th&gt;
&lt;th&gt;What to say&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Import one product&lt;/td&gt;
&lt;td&gt;"Import this product: [URL]"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Import with markup&lt;/td&gt;
&lt;td&gt;"Import this, mark up 2.5x: [URL]"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bulk import&lt;/td&gt;
&lt;td&gt;"Import these products: [URL1] [URL2] [URL3]"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Push to store&lt;/td&gt;
&lt;td&gt;"Push this product to my Shopify store"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Push to all stores&lt;/td&gt;
&lt;td&gt;"Push to all my stores"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SEO rewrite&lt;/td&gt;
&lt;td&gt;"Rewrite the title for SEO, then push"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Clean up titles&lt;/td&gt;
&lt;td&gt;"Clean up the AliExpress title"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Check push status&lt;/td&gt;
&lt;td&gt;"Did that push go through?"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;github.com/lofder/dsers-mcp-product&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;npm: &lt;a href="https://www.npmjs.com/package/@lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;npmjs.com/package/@lofder/dsers-mcp-product&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;MCP Registry: &lt;a href="https://registry.modelcontextprotocol.io/servers/io.github.lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;registry.modelcontextprotocol.io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you run into any issues, &lt;a href="https://github.com/lofder/dsers-mcp-product/issues" rel="noopener noreferrer"&gt;open a GitHub issue&lt;/a&gt; — I'm actively fixing bugs and adding features. This is still early days, and feedback from actual store owners is what makes it better.&lt;/p&gt;

</description>
      <category>dropshipping</category>
      <category>ai</category>
      <category>shopify</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Your MCP Server Shouldn't Need Your Password</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Wed, 25 Mar 2026 13:54:44 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/your-mcp-server-shouldnt-need-your-password-3od9</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/your-mcp-server-shouldnt-need-your-password-3od9</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; MCP servers shouldn't require users to put plaintext passwords in config files. &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt; uses browser-based zero-password login — you authenticate on the official DSers website, and the tool picks up an encrypted session token. Your password never touches the MCP server.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Awkward Config Block
&lt;/h2&gt;

&lt;p&gt;If you've set up any MCP server that connects to a third-party service, you've probably seen something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"some-tool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SERVICE_EMAIL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"you@example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SERVICE_PASSWORD"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hunter2"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your password. In a JSON file. On your disk. Possibly synced to a dotfiles repo.&lt;/p&gt;

&lt;p&gt;I shipped exactly this in v1.0 of &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt;, my MCP server for automating &lt;a href="https://www.dsers.com/" rel="noopener noreferrer"&gt;DSers&lt;/a&gt; dropshipping imports. (If you missed the backstory, I wrote about &lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;building it from scratch here&lt;/a&gt;.) It worked fine. Import from AliExpress, apply pricing rules, push to Shopify — all through Claude or Cursor.&lt;/p&gt;

&lt;p&gt;But every time someone set it up, that &lt;code&gt;DSERS_PASSWORD&lt;/code&gt; line bothered me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why It Matters More Than You Think
&lt;/h2&gt;

&lt;p&gt;MCP servers run locally. They have access to your filesystem, your environment variables, your network. When you put a plaintext password in a config file, you're trusting every other tool on your machine, every extension in your editor, every sync service that touches that directory.&lt;/p&gt;

&lt;p&gt;For a tool that manages your store inventory and can push products to live Shopify stores, that's not a risk I wanted users to carry.&lt;/p&gt;

&lt;p&gt;Some MCP servers solve this with API keys or tokens that have limited scope. DSers doesn't offer that for individual users. Your DSers credentials are your DSers credentials. So I needed a different approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Browser Login: Let DSers Handle Authentication
&lt;/h2&gt;

&lt;p&gt;The fix in v1.1.5 was to get out of the credential chain entirely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @lofder/dsers-mcp-product login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens your default browser to the DSers login page. You log in normally, with whatever method you already use (email/password, Google sign-in). The tool captures the session cookie through a local callback, encrypts it with AES-256-GCM, and stores it on disk.&lt;/p&gt;

&lt;p&gt;Your password never touches the MCP server. Never appears in a config file. Never gets logged.&lt;/p&gt;
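&lt;p&gt;For the curious, the encrypt-at-rest step can be sketched in a few lines. This is an illustration, not the actual dsers-mcp-product source; the helper names and key handling are my assumptions, and only the AES-256-GCM mode comes from the release notes:&lt;/p&gt;

```python
# Hypothetical sketch: encrypt a captured session token with AES-256-GCM.
# Key management (where the key itself lives) is out of scope here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_token(token: bytes, key: bytes) -> bytes:
    nonce = os.urandom(12)  # 96-bit nonce, fresh for every encryption
    return nonce + AESGCM(key).encrypt(nonce, token, None)

def decrypt_token(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]  # GCM also verifies integrity
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

&lt;p&gt;The useful property of GCM is that tampering with the stored blob makes decryption fail loudly instead of quietly yielding a corrupted token.&lt;/p&gt;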

&lt;p&gt;The MCP config becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@lofder/dsers-mcp-product"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No &lt;code&gt;env&lt;/code&gt; block. No credentials.&lt;/p&gt;

&lt;p&gt;The session lasts roughly 6 hours. When it expires, the tool prompts you to re-login. Could be smoother, but it's a lot better than a plaintext password.&lt;/p&gt;

&lt;h2&gt;
  
  
  Push Guards: Because AI Makes Mistakes
&lt;/h2&gt;

&lt;p&gt;While I was rethinking security, I also added something I'd been meaning to build: pre-push safety checks.&lt;/p&gt;

&lt;p&gt;Before any product gets pushed to your store, the server now automatically blocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero-price products&lt;/strong&gt; — variants with $0.00 price&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Below-cost pricing&lt;/strong&gt; — sell price lower than supplier cost&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero-stock items&lt;/strong&gt; — out-of-stock variants about to go live&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These checks run before the push reaches DSers. The AI gets a clear error it can act on — fix the pricing, drop the variant, or ask you what to do.&lt;/p&gt;
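&lt;p&gt;In pseudocode terms, the three checks boil down to something like this. A sketch with made-up field names (&lt;code&gt;price&lt;/code&gt;, &lt;code&gt;cost&lt;/code&gt;, &lt;code&gt;stock&lt;/code&gt;, &lt;code&gt;sku&lt;/code&gt;), not the actual server code:&lt;/p&gt;

```python
# Illustrative pre-push guard: returns blocking issues; empty list means go.
# Field names are assumptions for this sketch, not dsers-mcp-product internals.
import operator

def push_guard(variants):
    issues = []
    for v in variants:
        if v["price"] == 0:
            issues.append({"sku": v["sku"], "error": "Zero-price product"})
        elif operator.lt(v["price"], v["cost"]):  # sell price below supplier cost
            issues.append({"sku": v["sku"], "error": "Below-cost pricing"})
        if v["stock"] == 0:
            issues.append({"sku": v["sku"], "error": "Zero-stock item"})
    return issues
```

&lt;p&gt;The point of returning structured issues rather than raising a bare exception is that the model gets something it can act on per variant.&lt;/p&gt;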

&lt;p&gt;Here's a real scenario: I imported a product with 12 variants. Two had a supplier cost of $8.50 but the pricing rule somehow set the sell price at $6.00. Without push guard, those would have gone live on Shopify at a loss. With it, the push gets blocked and the model sees:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;Error:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Below-cost pricing"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Cause:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2 variants priced below supplier cost"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Action:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Review pricing rule or exclude variants"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't about not trusting the AI. It's about not trusting the data. Supplier feeds have garbage in them all the time — prices that lag behind, stock counts frozen at zero, variants with placeholder data. A human would catch these by eyeballing the preview. The push guard catches them automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Changed Since v1.0
&lt;/h2&gt;

&lt;p&gt;If you read the &lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;first post&lt;/a&gt;, the core workflow is the same — tell your AI to import a product, set pricing, push to Shopify or Wix. But v1.1.5 added a few things beyond the login change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Push guards&lt;/strong&gt; (covered above)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accio.com support&lt;/strong&gt; — same import flow, one more supplier source alongside AliExpress and Alibaba&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A fourth prompt&lt;/strong&gt; (&lt;code&gt;seo-optimize&lt;/code&gt;) that rewrites titles and descriptions with AI before pushing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stock data in previews&lt;/strong&gt; — so you can see inventory before committing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better error messages&lt;/strong&gt; — clearer cause descriptions for import failures, JSON validation guidance, transient push state handling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CLI &lt;code&gt;--help&lt;/code&gt; flag&lt;/strong&gt; — &lt;code&gt;npx @lofder/dsers-mcp-product --help&lt;/code&gt; now works&lt;/li&gt;
&lt;li&gt;Prompt count up from 3 to 4, tools steady at 7, and the server is now listed on 9 platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# One-time login (opens browser)&lt;/span&gt;
npx @lofder/dsers-mcp-product login

&lt;span class="c"&gt;# Add to your MCP client — no password needed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub: &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;github.com/lofder/dsers-mcp-product&lt;/a&gt;&lt;br&gt;
npm: &lt;code&gt;@lofder/dsers-mcp-product&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Free, open source, MIT license. If you're running dropshipping stores and using an MCP client, give it a spin. Issues and feedback welcome.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Previously: &lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;I Built an MCP Server to Automate Dropshipping Product Imports&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>security</category>
      <category>dropshipping</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Your AI Agent Has Amnesia — Here's How to Fix It (MCP + Mem0 + Qdrant)</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Tue, 24 Mar 2026 15:12:53 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/your-ai-agent-has-amnesia-heres-how-to-fix-it-mcp-mem0-qdrant-4905</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/your-ai-agent-has-amnesia-heres-how-to-fix-it-mcp-mem0-qdrant-4905</guid>
      <description>&lt;p&gt;Every AI agent you have ever built forgets everything the moment the conversation ends.&lt;/p&gt;

&lt;p&gt;I run a fleet of AI agents on Feishu (think of it as the Chinese Slack) and Telegram. A main orchestrator, a devops agent, a content writer, half a dozen specialized workers. One day my content agent asked me, for the fifth time, what writing style I preferred. The devops agent had no idea we had already debugged the same DNS issue last week. Every morning, each agent woke up as a blank slate.&lt;/p&gt;

&lt;p&gt;That is not a feature. It is a bug. So I built &lt;a href="https://github.com/lofder/smart-memory-gateway" rel="noopener noreferrer"&gt;smart-memory-gateway&lt;/a&gt; to fix it.&lt;/p&gt;

&lt;p&gt;This article is a conceptual deep-dive into the architecture decisions behind it. You do not need to know what MCP or Mem0 are. You just need to have felt the pain of stateless agents.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Memory Hierarchy Your Agent Is Missing
&lt;/h2&gt;

&lt;p&gt;If you have taken a computer architecture class, you know the CPU cache hierarchy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Access speed    +-----------+    Capacity
  fastest       |  L1 Cache |    smallest
                +-----------+
                |  L2 Cache |
                +-----------+
                |  L3 Cache |
                +-----------+
                |    RAM    |
                +-----------+
                |   Disk    |    largest
  slowest       +-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AI agents have an equivalent hierarchy, but most developers only build two of the three layers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Layer    | Analogy   | What It Is                        | Properties
---------|-----------|-----------------------------------|--------------------
L1       | L1 Cache  | Context window (conversation)     | Fast, small, gone when chat ends
L2       | L2 Cache  | Persistent semantic memory        | THIS IS THE MISSING PIECE
L3       | Disk      | Files, databases, wikis           | Slow to search, huge, unstructured
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;L1&lt;/strong&gt; is what every agent already has: the conversation history. It is fast and relevant, but it vanishes when the session ends (or when the context window fills up and older messages get evicted).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;L3&lt;/strong&gt; is what people reach for first when they want "memory": dump everything into a vector database, a folder of markdown files, or a RAG pipeline over your documents. It works for reference material, but it is cold storage. Searching for "what does the user prefer" across thousands of documents is slow and imprecise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;L2&lt;/strong&gt; -- persistent semantic memory -- is what sits between them. It stores extracted facts, preferences, lessons, and decisions. It is small enough to search quickly, structured enough to filter precisely, and persistent across every conversation. This is what smart-memory-gateway provides.&lt;/p&gt;

&lt;p&gt;The key insight: your agent does not need to remember every word of every conversation. It needs to remember the &lt;em&gt;conclusions&lt;/em&gt; -- the user prefers concise writing, the production database is on port 5433, we decided to use Redis for caching last Tuesday.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Scope Matters More Than You Think
&lt;/h2&gt;

&lt;p&gt;Here is a concrete problem. I run two agents on the same Feishu platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;devops agent&lt;/strong&gt; that manages servers, monitors logs, and runs deployments&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;content agent&lt;/strong&gt; that drafts articles, manages social media, and writes ad copy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without scope isolation, here is what actually happened: the content agent started referring to "the cluster" in marketing copy. The devops agent once suggested we should "A/B test the nginx configuration." Their memories had bled into each other.&lt;/p&gt;

&lt;p&gt;The solution is a 4-scope model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                        +-----------+
                        |  global   |  Shared across all agents and chats
                        +-----+-----+  (user facts, cross-cutting preferences)
                              |
              +---------------+---------------+
              |               |               |
       +------+------+ +------+------+ +------+------+
       | group:oc01  | | group:oc02  | |     dm      |  Per-chat / per-group
       +------+------+ +------+------+ +------+------+  (project context, group decisions)
              |               |               |
       +------+------+ +------+------+ +------+------+
       | agent:devops| | agent:writer| | agent:...   |  Per-agent private
       +------+------+ +------+------+ +------+------+  (agent-specific procedures)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every memory is tagged with exactly one scope at write time. Every search merges results from the current scope &lt;em&gt;and&lt;/em&gt; global, so agents always have access to cross-cutting knowledge (like user preferences) without seeing each other's operational details.&lt;/p&gt;
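&lt;p&gt;Reduced to its essence, the merge rule looks like this. A toy list filter standing in for the real vector search:&lt;/p&gt;

```python
# Toy illustration of the dual-query merge: a search sees its own scope plus
# global, never a sibling scope. Real retrieval is semantic; this is just
# the visibility rule.
def scoped_search(memories, scope):
    return [m for m in memories if m["scope"] in (scope, "global")]
```

&lt;p&gt;A devops query therefore surfaces global user preferences alongside its own operational notes, while the writer's private memories stay invisible to it.&lt;/p&gt;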

&lt;p&gt;The config makes this concrete. Here is the permission model from the actual &lt;code&gt;config.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;agents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;read&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;group:*"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;dm&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent:*"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;write&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;group:*"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;dm&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;allowed_types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;preference&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;fact&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;procedure&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;lesson&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;decision&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;task_log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;devops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;read&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;group:*"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;dm&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent:*"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;write&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;group:*"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;dm&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent:*"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;allowed_types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;preference&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;fact&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;procedure&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;lesson&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;decision&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;task_log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;writer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;read&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
    &lt;span class="na"&gt;write&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent:writer"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;allowed_types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;procedure&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;task_log&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;default_agent_policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;read&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;write&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
  &lt;span class="na"&gt;allowed_types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main orchestrator can read and write broadly. Specialized workers like &lt;code&gt;writer&lt;/code&gt; can only write to their own private scope. And by default, unknown agents get read-only access to global memory and cannot write at all.&lt;/p&gt;
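&lt;p&gt;Evaluating a wildcard pattern like &lt;code&gt;"group:*"&lt;/code&gt; against a concrete scope is cheap. Glob-style matching is my assumption here; the policy semantics come from the config above:&lt;/p&gt;

```python
# Check whether a policy (a list of scope patterns) permits a concrete scope.
# Glob matching via fnmatch is an assumption for this sketch.
from fnmatch import fnmatch

def allowed(policy_scopes, scope):
    return any(fnmatch(scope, pattern) for pattern in policy_scopes)
```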

&lt;p&gt;This is not just about data hygiene. It is about trust boundaries. If an agent gets compromised or hallucinates, the blast radius is contained.&lt;/p&gt;




&lt;h2&gt;
  
  
  The MCP Interface: Why Not Just a REST API?
&lt;/h2&gt;

&lt;p&gt;MCP stands for &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; -- it is an open standard for connecting AI agents to tools and data sources. You might be wondering: why not just expose memory as a REST API?&lt;/p&gt;

&lt;p&gt;The answer is ergonomics. With a REST API, you need to write custom integration code in every agent: HTTP client setup, authentication, response parsing, error handling. With MCP, your agent calls memory tools the exact same way it calls &lt;em&gt;any other tool&lt;/em&gt; -- file operations, web search, code execution. The model already knows how to use tools. Memory becomes just another tool in the toolbox.&lt;/p&gt;

&lt;p&gt;smart-memory-gateway exposes 5 tools through FastMCP:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mem0_add&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Store a memory with scope, type, trust level, provenance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mem0_search&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Semantic search with scope isolation and dual-query merge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mem0_get_all&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List all memories, optionally filtered by scope&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mem0_status&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Health check: memory counts, scope distribution, backend status&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mem0_maintenance&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Trigger daily/weekly maintenance cycles&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here is what a real conversation flow looks like. When my agent starts a new chat, it does not ask "what do you like?" again. It searches memory first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User: Help me write a product description for the new headphones.

Agent (internal): call mem0_search(
    query="user writing style preferences",
    scope="group:oc_marketing",
    limit=5
)

Memory returns:
  - "User prefers concise, no-fluff writing style" (trust: high)
  - "Product descriptions should lead with the benefit, not specs" (trust: medium)
  - "Avoid exclamation marks in copy" (trust: high)

Agent: Here's a draft in your preferred concise style,
       leading with the benefit...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And after the conversation, the agent stores the conclusion, not the whole chat:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nc"&gt;Agent &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;internal&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt; &lt;span class="nf"&gt;mem0_add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Headphone product line uses the tagline format:
             [benefit] + [one technical proof point]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;group:oc_marketing&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;mem_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;decision&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_approved&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;trust&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also built a &lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;separate MCP server for dropshipping product imports&lt;/a&gt; that consumes this memory as a client. When that agent processes a new product, it queries memory for supplier preferences, pricing rules, and past decisions -- all scoped to the relevant workspace.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Write Path Trade-off: Fast vs. Smart
&lt;/h2&gt;

&lt;p&gt;This was the single most impactful architecture decision in the project, and I almost got it wrong.&lt;/p&gt;

&lt;p&gt;Mem0 has a built-in feature where every &lt;code&gt;add()&lt;/code&gt; call passes through an LLM to extract structured entities and relationships from the raw text. It is smart. It is also slow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write with LLM extraction (infer=True):     5-8 seconds
Write with embedding only  (infer=False):   ~1.3 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a real-time chat, 5-8 seconds of latency on every memory write is unacceptable. The user is waiting for a response while the agent is quietly making an API call to Claude or GPT to analyze the memory content.&lt;/p&gt;

&lt;p&gt;The solution is a hot/cold split inspired by the &lt;a href="https://arxiv.org/abs/2602.13258" rel="noopener noreferrer"&gt;MAPLE paper&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                HOT PATH (real-time)              COLD PATH (scheduled)
                ~1.3s per write                   Runs at 3:00 AM

User talks  ──&amp;gt; mem0_add(infer=False) ──&amp;gt;  Qdrant    &amp;lt;── maintenance.py
to agent        Embedding only                         │
                No LLM extraction                      ├── Opus re-extraction
                No classification                      ├── Dedup (0.92 threshold)
                                                       ├── Classification
                                                       ├── Conflict detection
                                                       └── Decay scoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During the day, writes are fast: embed the text, store it with metadata, move on. The &lt;code&gt;infer=False&lt;/code&gt; flag tells Mem0 to skip the LLM extraction step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mem&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;default&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;infer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At 3 AM, the maintenance script wakes up, pulls all of today's memories, and re-processes them through a high-quality model (Claude Opus) for proper entity extraction, deduplication, and classification. It is the same data, but now with rich structure.&lt;/p&gt;

&lt;p&gt;This means there is a window -- roughly from when a memory is written until the next maintenance run -- where the memory exists but is not fully classified. In practice, this rarely matters because semantic search works on the embedding regardless. But if you needed a memory classified as &lt;code&gt;procedure&lt;/code&gt; vs &lt;code&gt;task_log&lt;/code&gt; for filtering, you would have to wait until maintenance runs.&lt;/p&gt;

&lt;p&gt;The fallback chain also deserves a mention. If Mem0 is unavailable at write time, the server does not drop the memory. It queues it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;mem&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;entry&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;write_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;write_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metadata&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;WRITE_QUEUE_PATH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;queued&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reason&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Mem0 unavailable, cached for replay&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next time a write succeeds, the queued entries are replayed. Nothing is lost, even when the backend crashes mid-conversation.&lt;/p&gt;
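&lt;p&gt;A replay routine in that spirit might look like this (a sketch, not the gateway's actual code: &lt;code&gt;WRITE_QUEUE_PATH&lt;/code&gt; mirrors the constant in the snippet above, and &lt;code&gt;backend_write&lt;/code&gt; is a hypothetical stand-in for the real Mem0 write):&lt;/p&gt;

```python
import json
import os

WRITE_QUEUE_PATH = "write_queue.jsonl"  # mirrors the constant in the snippet above

def replay_queue(backend_write):
    """Re-submit queued entries; keep any that fail again for the next attempt."""
    if not os.path.exists(WRITE_QUEUE_PATH):
        return 0
    with open(WRITE_QUEUE_PATH) as f:
        entries = [json.loads(line) for line in f if line.strip()]
    still_failing, replayed = [], 0
    for entry in entries:
        try:
            backend_write(entry)         # hypothetical stand-in for the Mem0 write
            replayed += 1
        except Exception:
            still_failing.append(entry)  # backend still down; keep for next replay
    with open(WRITE_QUEUE_PATH, "w") as f:  # rewrite the queue with survivors only
        for entry in still_failing:
            f.write(json.dumps(entry) + "\n")
    return replayed
```

&lt;p&gt;Entries that fail a second time simply stay queued, so even a flapping backend never drops data.&lt;/p&gt;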




&lt;h2&gt;
  
  
  Keeping Memory Alive: Decay, Consolidation, and Conflict
&lt;/h2&gt;

&lt;p&gt;Here is something most "memory for AI" projects get wrong: they treat memory as write-once-read-forever. Store it and forget about it (no pun intended).&lt;/p&gt;

&lt;p&gt;But memory decays. Memory conflicts. Memory fragments. If you have used your system for three months, you have duplicates, contradictions, and a pile of task logs from February that nobody will ever query again. Without maintenance, search quality degrades as noise drowns out signal.&lt;/p&gt;

&lt;p&gt;The smart-memory-gateway maintenance system has four cognitive engines, and each one draws from a different idea about how memory works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Decay: The Forgetting Curve
&lt;/h3&gt;

&lt;p&gt;Inspired by &lt;a href="https://en.wikipedia.org/wiki/Retrieval_practice" rel="noopener noreferrer"&gt;Bjork's storage-retrieval strength theory&lt;/a&gt; and the &lt;a href="https://arxiv.org/abs/2601.18642" rel="noopener noreferrer"&gt;FadeMem paper&lt;/a&gt;, each memory gets an importance score that decays exponentially over time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I(t) = exp(-lambda * effective_age)

where:
  effective_age = age_days / (1 + min(log(1 + access_count), cap))
  lambda = ln(2) / half_life_days
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The crucial detail: &lt;code&gt;access_count&lt;/code&gt; slows the decay. A memory that gets retrieved often decays slower than one that was written and never read again. This mirrors how human memory works -- the more you recall something, the harder it is to forget.&lt;/p&gt;

&lt;p&gt;Different memory types have different half-lives:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Half-life&lt;/th&gt;
&lt;th&gt;Rationale&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;task_log&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;30 days&lt;/td&gt;
&lt;td&gt;"Deployed v2.1 to staging" -- useful for a month, noise after that&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;procedure&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;90 days&lt;/td&gt;
&lt;td&gt;"Restart command is X" -- lasts longer but can become stale&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;fact&lt;/code&gt;, &lt;code&gt;preference&lt;/code&gt;, &lt;code&gt;lesson&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Never decays&lt;/td&gt;
&lt;td&gt;"User's name is Y" -- these are permanent&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When importance drops below 0.10, the memory is archived, not deleted -- a soft removal. You can always un-archive it if needed.&lt;/p&gt;
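&lt;p&gt;The scoring fits in a few lines. This sketch uses the formula and the half-lives from the table above; &lt;code&gt;ACCESS_CAP&lt;/code&gt; is an assumed value, since the post does not state the cap on the retrieval bonus:&lt;/p&gt;

```python
import math

# Half-lives from the table above; None means the type never decays.
HALF_LIFE_DAYS = {"task_log": 30, "procedure": 90,
                  "fact": None, "preference": None, "lesson": None}
ARCHIVE_THRESHOLD = 0.10
ACCESS_CAP = 3.0  # assumed cap on the retrieval bonus (not given in the post)

def importance(mem_type, age_days, access_count):
    half_life = HALF_LIFE_DAYS.get(mem_type)
    if half_life is None:
        return 1.0  # fact / preference / lesson are permanent
    # Frequent retrieval slows decay by shrinking the effective age.
    effective_age = age_days / (1 + min(math.log(1 + access_count), ACCESS_CAP))
    lam = math.log(2) / half_life
    return math.exp(-lam * effective_age)

def should_archive(mem_type, age_days, access_count):
    return importance(mem_type, age_days, access_count) < ARCHIVE_THRESHOLD
```

&lt;p&gt;A 30-day-old &lt;code&gt;task_log&lt;/code&gt; that was never read back sits at exactly 0.5 importance; the same entry retrieved a few times stays well above that.&lt;/p&gt;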

&lt;h3&gt;
  
  
  Consolidation: Turning Fragments Into Knowledge
&lt;/h3&gt;

&lt;p&gt;After a week of conversations, you might have 15 &lt;code&gt;task_log&lt;/code&gt; entries about debugging the same service. Individually, they are noise. Together, they contain a lesson.&lt;/p&gt;

&lt;p&gt;The consolidation engine groups related memories, sends them through an LLM, and produces a single &lt;code&gt;knowledge&lt;/code&gt; summary. The original fragments get an accelerated decay but are not deleted -- provenance is tracked through a &lt;code&gt;consolidated_from&lt;/code&gt; field.&lt;/p&gt;
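&lt;p&gt;In outline, the pass looks like this (a sketch: the literal &lt;code&gt;topic&lt;/code&gt; grouping key and the injected &lt;code&gt;summarize&lt;/code&gt; callable are illustrative -- the real engine groups by semantic similarity and calls an LLM):&lt;/p&gt;

```python
from collections import defaultdict

def consolidate(task_logs, summarize):
    """Fold groups of related task_log fragments into single knowledge entries."""
    groups = defaultdict(list)
    for mem in task_logs:
        groups[mem["topic"]].append(mem)  # illustrative key; really: similarity
    knowledge = []
    for topic, frags in groups.items():
        if len(frags) < 2:
            continue  # a lone fragment has nothing to consolidate
        knowledge.append({
            "type": "knowledge",
            "content": summarize([f["content"] for f in frags]),  # LLM stand-in
            "consolidated_from": [f["id"] for f in frags],        # provenance
        })
        for f in frags:
            f["accelerated_decay"] = True  # originals fade faster, but survive
    return knowledge
```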

&lt;h3&gt;
  
  
  Conflict Resolution: When Memory Contradicts Itself
&lt;/h3&gt;

&lt;p&gt;"User prefers dark mode" and "User prefers light mode" -- stored two months apart. Which one is true?&lt;/p&gt;

&lt;p&gt;The conflict engine identifies same-type memories within the same scope that might contradict each other, then uses an LLM to judge: are they actually contradictory, or just about different topics? If contradictory, the newer one wins (with a &lt;code&gt;superseded_by&lt;/code&gt; marker on the loser).&lt;/p&gt;
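&lt;p&gt;The resolution rule itself is small; the hard part is the judgment, which is delegated to the LLM. A sketch with that judgment injected as a callable (field names are illustrative):&lt;/p&gt;

```python
def resolve_conflict(mem_a, mem_b, is_contradictory):
    """Newer memory wins; the loser is marked superseded, not deleted.

    `is_contradictory` stands in for the LLM judgment described above.
    """
    if not is_contradictory(mem_a["content"], mem_b["content"]):
        return None  # different topics, nothing to resolve
    winner, loser = ((mem_a, mem_b) if mem_a["written_at"] >= mem_b["written_at"]
                     else (mem_b, mem_a))
    loser["superseded_by"] = winner["id"]  # loser stays queryable, but flagged
    return winner
```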

&lt;h3&gt;
  
  
  Putting It Together
&lt;/h3&gt;

&lt;p&gt;Maintenance runs on a two-tier schedule:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Daily (3:00 AM):
  1. Re-extract today's memories with Opus (high-quality LLM)
  2. Deduplicate (cosine similarity &amp;gt; 0.92 = auto-merge)

Weekly (Sunday):
  3. All daily steps, plus:
  4. Conflict detection and resolution
  5. Consolidation of fragmented task_logs
  6. Decay scoring and archival
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
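&lt;p&gt;Step 2 of the daily pass is simple to sketch: compare embeddings pairwise and merge anything above the 0.92 threshold (the merge bookkeeping here is illustrative, not the gateway's actual schema):&lt;/p&gt;

```python
import math

DEDUP_THRESHOLD = 0.92  # from the schedule above: cosine similarity > 0.92 = merge

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def dedup(memories):
    """Keep the first of each near-duplicate pair; fold later copies into it."""
    kept = []
    for mem in memories:
        match = next((k for k in kept
                      if cosine(k["vec"], mem["vec"]) > DEDUP_THRESHOLD), None)
        if match:
            match["merged_ids"] = match.get("merged_ids", []) + [mem["id"]]
        else:
            kept.append(mem)
    return kept
```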



&lt;p&gt;A typical weekly report (sent to Feishu automatically) looks like: 4 memories re-extracted, 2 duplicates merged, 1 conflict resolved, 3 task_log groups consolidated into knowledge, 7 old entries archived. Total: 142 active memories across 6 scopes.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Would Do Differently
&lt;/h2&gt;

&lt;p&gt;No project survives contact with production unchanged. Here is what I learned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The classification cascade was over-engineered early on.&lt;/strong&gt; I built a two-layer classifier (keyword rules + LLM fallback) before I had enough data to know which memory types actually mattered. In practice, the keyword layer catches about 70% of cases and the remaining 30% default to &lt;code&gt;task_log&lt;/code&gt; until maintenance reclassifies them. If I started over, I would ship with keywords-only and add the LLM layer only after accumulating a month of production data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scope granularity hit a sweet spot.&lt;/strong&gt; I initially considered per-message scoping, which would have been too fine-grained. The 4-scope model (global / group / dm / agent) turned out to be exactly right. Every memory naturally fits one of these, and I have never needed a fifth scope.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The write queue was a late addition that saved me multiple times.&lt;/strong&gt; I did not plan for Qdrant downtime when I started. The first time Qdrant crashed during a conversation, memories were silently lost. Adding the JSONL write queue with replay-on-next-write was a 30-line change that made the system genuinely resilient. Build your degradation path early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-time LLM extraction is a trap for multi-agent systems.&lt;/strong&gt; Every article about Mem0 shows the &lt;code&gt;infer=True&lt;/code&gt; default, where the LLM enriches memories on write. That works for a single-user chatbot. With 8 agents generating memories throughout the day, those 5-8 second writes would have serialized everything. The hot/cold split was the right call. If you are building for multiple agents, start with &lt;code&gt;infer=False&lt;/code&gt;.&lt;/p&gt;
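&lt;p&gt;The shape of that hot/cold split, stripped of any Mem0 specifics (the class and method names here are illustrative, not the gateway's real API): the hot path appends raw content in milliseconds, and the expensive extraction happens later in a maintenance pass:&lt;/p&gt;

```python
class HotColdStore:
    """Hot path: store raw text immediately (infer=False semantics).
    Cold path: a maintenance pass enriches pending entries later."""

    def __init__(self, enrich):
        self.memories = []
        self.enrich = enrich  # slow LLM extraction, deferred to maintenance

    def write(self, content):
        # Hot path: never blocks the agent on an LLM call.
        self.memories.append({"content": content, "enriched": False})

    def run_maintenance(self):
        # Cold path: the nightly batch does the expensive extraction.
        for mem in self.memories:
            if not mem["enriched"]:
                mem["content"] = self.enrich(mem["content"])
                mem["enriched"] = True
```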

&lt;p&gt;&lt;strong&gt;What surprised me most:&lt;/strong&gt; how quickly semantic search quality improves once you add scope filtering. Without scopes, searching "deployment process" returns a mix of devops procedures and content publishing workflows. With scope, the results are precise and relevant immediately. Scoping is not just an access control feature; it is a search quality feature.&lt;/p&gt;
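&lt;p&gt;The effect is easy to demonstrate even with a toy ranker standing in for vector search: filtering by scope before ranking means cross-domain matches never compete for the top slots (the data shape here is illustrative):&lt;/p&gt;

```python
def search(memories, query_terms, scope=None):
    """Rank by naive term overlap; apply the scope filter first when given.

    A toy stand-in for vector search -- the point is that filtering precedes
    ranking, so results from other domains are excluded, not just demoted.
    """
    pool = [m for m in memories if scope is None or m["scope"] == scope]
    scored = [(sum(t in m["content"] for t in query_terms), m) for m in pool]
    return [m for score, m in sorted(scored, key=lambda p: -p[0]) if score > 0]
```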




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;The full source is at &lt;a href="https://github.com/lofder/smart-memory-gateway" rel="noopener noreferrer"&gt;github.com/lofder/smart-memory-gateway&lt;/a&gt;. It runs as a standard MCP server -- plug it into any MCP-compatible framework.&lt;/p&gt;

&lt;p&gt;If you want to see a real-world consumer of this memory system, check out my &lt;a href="https://dev.to/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b"&gt;dropshipping product import MCP server&lt;/a&gt;, which queries memory for supplier preferences and pricing rules every time it processes a product.&lt;/p&gt;

&lt;p&gt;The stack is intentionally simple: Python + FastMCP + Mem0 + Qdrant. No Kubernetes. No microservices. Just a single-process MCP server that starts with &lt;code&gt;python src/server.py&lt;/code&gt; and a Qdrant instance you can run from a single binary.&lt;/p&gt;

&lt;p&gt;Your agents do not have to start every conversation from scratch. Give them memory. They will thank you -- and more importantly, your users will stop repeating themselves.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>python</category>
      <category>architecture</category>
    </item>
    <item>
      <title>I Built an MCP Server to Automate Dropshipping Product Imports</title>
      <dc:creator>lofder.issac</dc:creator>
      <pubDate>Mon, 23 Mar 2026 06:38:33 +0000</pubDate>
      <link>https://forem.com/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b</link>
      <guid>https://forem.com/_95a3e57463e6442feacd0/i-built-an-mcp-server-to-automate-dropshipping-product-imports-3m5b</guid>
      <description>&lt;h1&gt;
  
  
  I Built an MCP Server to Automate Dropshipping Product Imports
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt; is a free, open-source MCP server (TypeScript) that automates dropshipping product imports from AliExpress/Alibaba to Shopify &amp;amp; Wix via DSers. 9 tools, 4 workflow prompts, zero-password auth, pre-push safety checks. Install: &lt;code&gt;npx @lofder/dsers-mcp-product login&lt;/code&gt;. No coding required.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem That Wouldn't Go Away
&lt;/h2&gt;

&lt;p&gt;I've been running dropshipping stores on and off for a couple of years. If you've done it, you know the drill: find a product on AliExpress or a supplier platform, copy the title, download images, tweak the description, set your margins, push it to your Shopify store. Repeat. Fifty times. Every week.&lt;/p&gt;

&lt;p&gt;It's not hard work. It's just tedious work. And tedious work is where mistakes happen — wrong prices, missing variants, broken image links. I tried automating bits of it with scripts, browser extensions, even some Zapier flows. Nothing stuck. The workflows were too fragmented.&lt;/p&gt;

&lt;p&gt;Then MCP happened.&lt;/p&gt;

&lt;p&gt;If you haven't been following, the &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; is Anthropic's open standard for connecting AI models to external tools and data. When I first read the spec, I had one of those "wait, this is exactly what I need" moments. Instead of building a full app with a UI, authentication flows, and all that overhead, I could build a tool server that any MCP-compatible client (Claude Desktop, Cursor, etc.) could talk to.&lt;/p&gt;

&lt;p&gt;So I built &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;dsers-mcp-product&lt;/a&gt; — an MCP server that handles the entire product import pipeline for DSers, from discovering products to pushing them to your connected stores.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MCP and Not Just a REST API?
&lt;/h2&gt;

&lt;p&gt;Fair question. I could have built a CLI tool or a REST service. But MCP gives you something those don't: conversational orchestration.&lt;/p&gt;

&lt;p&gt;With a traditional API, you write the glue code. You decide the order of operations, handle edge cases in your scripts, build retry logic. With MCP, the AI client becomes your orchestrator. You tell Claude "import this product and push it to my US store with a 2.5x markup" and it figures out which tools to call, in what order, and handles the back-and-forth.&lt;/p&gt;

&lt;p&gt;That's not a gimmick. For a workflow that has real branching logic (what if the product has 47 variants? what if some images fail to load? what if the store has listing rules that reject certain categories?), having an intelligent orchestrator is genuinely useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;The server is built in TypeScript using the official MCP SDK. Here's the high-level picture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Claude / Cursor / Any MCP Client
        │
        ▼
┌─────────────────────────┐
│   dsers-mcp-product     │
│   (MCP Server)          │
│                         │
│  ┌─────────┐            │
│  │  Tools  │ ← 9 tools  │
│  └─────────┘            │
│  ┌─────────┐            │
│  │ Prompts │ ← 4 prompts│
│  └─────────┘            │
│                         │
│  Transport: stdio/SSE   │
└────────────┬────────────┘
             │
             ▼
       DSers Platform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing fancy. stdio transport for local clients like Claude Desktop, SSE for remote/web integrations. The server itself is stateless per-request — all state lives on the DSers side.&lt;/p&gt;

&lt;p&gt;I went with TypeScript because the MCP SDK has first-class TS support and because I wanted strong typing for the tool schemas. When you're defining tool parameters that an AI model will be calling, you really want those schemas to be precise. Vague schemas lead to vague tool calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 9 Tools
&lt;/h2&gt;

&lt;p&gt;Each tool maps to a distinct step in the product import workflow. I spent a lot of time thinking about granularity here — too few tools and each one becomes a god function with too many parameters; too many and the AI wastes tokens figuring out which to use.&lt;/p&gt;

&lt;p&gt;Here's what I landed on:&lt;/p&gt;

&lt;h3&gt;
  
  
  product.import
&lt;/h3&gt;

&lt;p&gt;The core tool. Takes a product URL or identifier and imports it into your DSers workspace. This handles fetching product data, images, variants, and specs. The heavy lifting.&lt;/p&gt;

&lt;h3&gt;
  
  
  product.preview
&lt;/h3&gt;

&lt;p&gt;Returns a structured preview of an imported product — title, price, images, variant matrix. This exists because I wanted the AI to be able to "look" at a product before deciding what to do with it. It's the equivalent of a human opening the product page and scanning it.&lt;/p&gt;

&lt;h3&gt;
  
  
  product.update_rules
&lt;/h3&gt;

&lt;p&gt;Incrementally edits pricing, content, image, or variant rules on a product that's already been imported. Before this existed, changing one rule meant re-importing the entire product. Now the AI can say "change the markup from 2x to 3x" without touching anything else. This was one of the most requested improvements — it makes the edit-then-push loop much tighter.&lt;/p&gt;

&lt;h3&gt;
  
  
  product.visibility
&lt;/h3&gt;

&lt;p&gt;Controls which variants and options are visible/active. Dropshipping products often come with 30+ variants and you only want to list 5-6. This tool lets you toggle visibility without deleting anything.&lt;/p&gt;

&lt;h3&gt;
  
  
  product.delete
&lt;/h3&gt;

&lt;p&gt;Removes a product from the DSers import list entirely. This is irreversible, so the tool requires explicit confirmation before proceeding. Useful for cleaning up test imports or products you've decided not to list.&lt;/p&gt;

&lt;h3&gt;
  
  
  store.discover
&lt;/h3&gt;

&lt;p&gt;Lists your connected stores with their status and capabilities. Before pushing a product, the AI needs to know where it can push to and what each store supports.&lt;/p&gt;

&lt;h3&gt;
  
  
  store.push
&lt;/h3&gt;

&lt;p&gt;Pushes a product to one or more connected stores. This is where pricing rules, category mapping, and store-specific formatting happen.&lt;/p&gt;

&lt;h3&gt;
  
  
  rules.validate
&lt;/h3&gt;

&lt;p&gt;Validates a product against a store's listing rules before pushing. Does the title exceed the character limit? Are all required fields populated? Are the images the right dimensions? Better to catch this upfront than get a rejection.&lt;/p&gt;

&lt;h3&gt;
  
  
  job.status
&lt;/h3&gt;

&lt;p&gt;Some operations (especially bulk imports) are async. This tool checks the status of running jobs. Simple but necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Split?
&lt;/h3&gt;

&lt;p&gt;The key insight was separating read operations from write operations. &lt;code&gt;product.preview&lt;/code&gt;, &lt;code&gt;store.discover&lt;/code&gt;, &lt;code&gt;rules.validate&lt;/code&gt;, and &lt;code&gt;job.status&lt;/code&gt; are all read-only. The AI can call them freely to gather information and make decisions. &lt;code&gt;product.import&lt;/code&gt;, &lt;code&gt;product.update_rules&lt;/code&gt;, &lt;code&gt;product.visibility&lt;/code&gt;, &lt;code&gt;product.delete&lt;/code&gt;, and &lt;code&gt;store.push&lt;/code&gt; are the write operations that actually change state.&lt;/p&gt;

&lt;p&gt;This matters because MCP clients can implement approval flows for write operations. You might want the AI to auto-discover and preview products but require your confirmation before actually importing or pushing.&lt;/p&gt;
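&lt;p&gt;The gate itself is a few lines in any language. A Python sketch (the tool names come from the list above; the approval callback is illustrative, since real MCP clients implement their own confirmation flows):&lt;/p&gt;

```python
READ_TOOLS = {"product.preview", "store.discover", "rules.validate", "job.status"}
WRITE_TOOLS = {"product.import", "product.update_rules", "product.visibility",
               "product.delete", "store.push"}

def call_tool(name, args, execute, approve):
    """Run reads freely; route writes through an approval callback first."""
    if name in READ_TOOLS:
        return execute(name, args)
    if name in WRITE_TOOLS:
        if not approve(name, args):
            return {"status": "rejected", "tool": name}
        return execute(name, args)
    raise ValueError(f"unknown tool: {name}")
```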

&lt;h2&gt;
  
  
  The 4 Prompts
&lt;/h2&gt;

&lt;p&gt;MCP prompts are pre-built conversation templates. Think of them as recipes that encode common workflows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;quick-import&lt;/code&gt; — Single product, single store. The "I just want this product in my store" flow. Calls import → preview → validate → push in sequence.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bulk-import&lt;/code&gt; — Multiple products from a search or category. Handles pagination, deduplication, and batch status tracking.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;multi-push&lt;/code&gt; — One product to multiple stores with per-store pricing and customization. Useful when you run stores in different markets.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;seo-optimize&lt;/code&gt; — Import a product, then let the AI rewrite the title and description for better search rankings before pushing. For sellers who want search-optimized listings without manual copywriting.&lt;/p&gt;

&lt;p&gt;These aren't magic. They're just well-structured prompt templates that guide the AI to use the tools in the right order. But they save a ton of back-and-forth compared to starting from scratch every time.&lt;/p&gt;
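&lt;p&gt;The &lt;code&gt;quick-import&lt;/code&gt; sequence, written out as a plain function (a sketch: in practice the MCP client decides the sequencing conversationally, and the &lt;code&gt;tools&lt;/code&gt; mapping here is illustrative):&lt;/p&gt;

```python
def quick_import(url, store_id, tools):
    """import -> preview -> validate -> push, stopping if validation fails."""
    product = tools["product.import"](url)
    preview = tools["product.preview"](product["id"])
    issues = tools["rules.validate"](product["id"], store_id)
    if issues:
        # Surface the problems instead of pushing a listing that will bounce.
        return {"pushed": False, "issues": issues, "preview": preview}
    result = tools["store.push"](product["id"], store_id)
    return {"pushed": True, "result": result, "preview": preview}
```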

&lt;h2&gt;
  
  
  Publishing to 8 Platforms (and What I Learned)
&lt;/h2&gt;

&lt;p&gt;Building the server took about two weeks of evenings. Publishing it took... also about two weeks. That surprised me.&lt;/p&gt;

&lt;p&gt;Here's where &lt;code&gt;dsers-mcp-product&lt;/code&gt; is listed now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;npm&lt;/strong&gt; — The obvious first step. &lt;code&gt;npx @lofder/dsers-mcp-product&lt;/code&gt; just works. Getting the package.json right for an MCP server that supports both stdio and SSE took some fiddling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP Registry&lt;/strong&gt; — Anthropic's official directory. Straightforward submission, but you need good documentation. They actually review what you submit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smithery&lt;/strong&gt; — Probably the most developer-friendly MCP directory right now. Their submission process is smooth and they have nice testing tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Glama&lt;/strong&gt; — This one has an automated quality scoring system. &lt;code&gt;dsers-mcp-product&lt;/code&gt; scored AAA, which I'm pretty proud of. Their criteria push you to have good error handling, proper schema definitions, and comprehensive tool descriptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mcp.so&lt;/strong&gt; — Community-driven directory. Quick to list, good for visibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mcpservers.org&lt;/strong&gt; — Another community directory. Similar process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;punkpeye/awesome-mcp-servers&lt;/strong&gt; — The canonical awesome-list for MCP servers on GitHub. Opened a PR, got merged. The maintainers are responsive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product Hunt&lt;/strong&gt; — Listed as &lt;a href="https://www.producthunt.com/posts/dropshipping-mcp-dsers" rel="noopener noreferrer"&gt;Dropshipping MCP DSers&lt;/a&gt;. Another channel for discoverability outside the developer bubble.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Wish I Knew Before Publishing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Documentation is everything.&lt;/strong&gt; Each platform wants slightly different things. Some want a README focused on installation, others want use-case examples, others want GIFs showing it in action. I ended up writing three different versions of the docs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schema quality matters more than you think.&lt;/strong&gt; Glama's AAA rating pushed me to tighten up every tool's input/output schema. Better descriptions, stricter validation, more helpful error messages. This wasn't busywork — it directly improved how well AI clients used the tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The npm package name matters.&lt;/strong&gt; I went with &lt;code&gt;@lofder/dsers-mcp-product&lt;/code&gt; under my scope. If I could redo it, I'd think harder about discoverability. People search for "mcp dropshipping" or "dsers mcp", not my username.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-platform testing is a time sink.&lt;/strong&gt; Claude Desktop on macOS, Cursor, and various web-based MCP clients all have slightly different behaviors. stdio transport works everywhere but SSE had quirks in some clients. Budget time for this.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Changed in My Workflow
&lt;/h2&gt;

&lt;p&gt;Before this project, importing 20 products and pushing them to two stores was a full evening's work. Now it's a conversation:&lt;/p&gt;

&lt;p&gt;"Import the top 5 products from [category], skip anything under 4 stars, apply 2.5x markup for the US store and 3x for the EU store, and push them."&lt;/p&gt;

&lt;p&gt;Claude calls the tools, shows me previews, validates against store rules, and pushes. I review and approve. The whole thing takes maybe 15 minutes, and I'm mostly just reading summaries and saying "yes."&lt;/p&gt;

&lt;p&gt;Is it perfect? No. Sometimes the AI picks weird variants to feature. Sometimes the pricing logic needs a nudge. But it's a 10x improvement over clicking through web UIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's new in v1.3
&lt;/h3&gt;

&lt;p&gt;Since the initial launch, a few things changed significantly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero-password authentication.&lt;/strong&gt; The old setup required putting your DSers email and password in a config file. That always felt wrong. Now you run &lt;code&gt;npx @lofder/dsers-mcp-product login&lt;/code&gt; — your browser opens the official DSers website, you log in there directly, and the tool picks up the session automatically. Your password never touches the MCP server. This alone made me much more comfortable recommending the tool to others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Incremental rule editing.&lt;/strong&gt; Before v1.3, changing a pricing rule meant re-importing the product from scratch. Now &lt;code&gt;product.update_rules&lt;/code&gt; lets you tweak rules on an already-imported product — change the markup, adjust the title prefix, swap out images — without losing any work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-push safety checks.&lt;/strong&gt; The server now automatically blocks pushes that would result in below-cost pricing, zero sell price, or all-variants-out-of-stock. It also detects conflicts between MCP pricing rules and your DSers store-level pricing rules, and tells you exactly how to resolve them.&lt;/p&gt;
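&lt;p&gt;Those three blocking conditions reduce to simple predicates. A sketch (the product shape is assumed for illustration; the real checks run inside the server before any push goes out):&lt;/p&gt;

```python
def pre_push_checks(product):
    """Return the list of blocking problems; an empty list means safe to push."""
    errors = []
    for v in product["variants"]:
        if v["sell_price"] <= 0:
            errors.append(f"{v['sku']}: zero or negative sell price")
        elif v["sell_price"] < v["cost"]:
            errors.append(f"{v['sku']}: below-cost pricing")
    if all(v["stock"] == 0 for v in product["variants"]):
        errors.append("all variants out of stock")
    return errors
```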

&lt;h2&gt;
  
  
  Takeaways for MCP Server Builders
&lt;/h2&gt;

&lt;p&gt;If you're thinking about building an MCP server, here's what I'd tell you:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with the workflow, not the API.&lt;/strong&gt; Map out what a human actually does, step by step. That's your tool list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Separate reads from writes.&lt;/strong&gt; Let the AI gather info freely but gate mutations behind explicit tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in tool descriptions.&lt;/strong&gt; The model reads them to decide which tool to call. Vague descriptions → wrong tool calls → bad UX.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ship to multiple directories early.&lt;/strong&gt; Each platform has different audiences. Glama's scoring system alone made my server better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TypeScript + the official SDK is the path of least resistance.&lt;/strong&gt; It works. Don't fight it.&lt;/p&gt;

&lt;p&gt;The code is open source at &lt;a href="https://github.com/lofder/dsers-mcp-product" rel="noopener noreferrer"&gt;github.com/lofder/dsers-mcp-product&lt;/a&gt;. If you're in the dropshipping space and want to try it, &lt;code&gt;npx @lofder/dsers-mcp-product login&lt;/code&gt; gets you set up in 30 seconds. Issues and PRs welcome.&lt;/p&gt;

&lt;p&gt;And if you're building MCP servers for other e-commerce workflows, I'd genuinely love to hear about it. This ecosystem is still early and there's a lot of low-hanging fruit.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Looking for a hands-on setup guide?&lt;/strong&gt; I wrote a step-by-step tutorial for store owners who want to get started without reading code: &lt;a href="https://dev.to/_95a3e57463e6442feacd0/how-to-automate-aliexpress-to-shopify-product-import-with-ai-step-by-step-guide-3f5a"&gt;How to Automate AliExpress to Shopify Product Import with AI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Find me on &lt;a href="https://github.com/lofder" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or my &lt;a href="https://lofder.github.io/" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>opensource</category>
      <category>dropshipping</category>
    </item>
  </channel>
</rss>
