<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Grigoriy Krasilnikov</title>
    <description>The latest articles on Forem by Grigoriy Krasilnikov (@morgan_chester).</description>
    <link>https://forem.com/morgan_chester</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F980159%2Ffbda66ed-2927-4216-8b4d-65bd8403f205.jpeg</url>
      <title>Forem: Grigoriy Krasilnikov</title>
      <link>https://forem.com/morgan_chester</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/morgan_chester"/>
    <language>en</language>
    <item>
      <title>A Production Pattern for AI Image Recognition Without Hardwiring Model Logic Into Your Backend</title>
      <dc:creator>Grigoriy Krasilnikov</dc:creator>
      <pubDate>Mon, 30 Mar 2026 13:46:43 +0000</pubDate>
      <link>https://forem.com/morgan_chester/a-production-pattern-for-ai-image-recognition-without-hardwiring-model-logic-into-your-backend-4814</link>
      <guid>https://forem.com/morgan_chester/a-production-pattern-for-ai-image-recognition-without-hardwiring-model-logic-into-your-backend-4814</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Direct model integration is not a crime.&lt;/p&gt;

&lt;p&gt;But if image recognition in your product is not the whole point of the system and is just one feature, an unpleasant thing shows up very quickly: the backend starts doing work that is not really its job.&lt;/p&gt;

&lt;p&gt;I am just describing my own case. Maybe it is not important for you, but it matters to me.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;This is not a holy war against SDKs.&lt;/p&gt;

&lt;p&gt;SDKs are not the real issue here.&lt;/p&gt;

&lt;p&gt;The real issue is where all the mess around the model should live so the application does not slowly start rotting from the inside.&lt;/p&gt;

&lt;p&gt;Because one thing is when AI is the product. In that case, yes, build around it.&lt;/p&gt;

&lt;p&gt;It is a different story when AI is just one function inside a larger system. A user uploads an image, and you want structured data out of it. A receipt. A document. A label. A business card. A book cover. It does not matter. The object changes clothes, but the mechanics stay the same.&lt;/p&gt;

&lt;p&gt;That is where I like a simple boundary: the app owns auth, validation, and business state, while &lt;code&gt;n8n&lt;/code&gt; owns the model-facing orchestration.&lt;/p&gt;

&lt;h2&gt;
  
  
  We Have Seen This Before
&lt;/h2&gt;

&lt;p&gt;The first move is usually predictable.&lt;/p&gt;

&lt;p&gt;You need image recognition, so you wire the model directly into the backend.&lt;/p&gt;

&lt;p&gt;Sometimes that is fine. Really fine. No need to turn it into drama.&lt;/p&gt;

&lt;p&gt;But then the usual human mess begins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prompts need tuning&lt;/li&gt;
&lt;li&gt;model responses need cleaning&lt;/li&gt;
&lt;li&gt;something needs normalization&lt;/li&gt;
&lt;li&gt;something needs retries&lt;/li&gt;
&lt;li&gt;something needs a fallback&lt;/li&gt;
&lt;li&gt;somebody wants provider switching&lt;/li&gt;
&lt;li&gt;and one day you realize the model is not returning data, it is returning a semi-liquid mistake&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the point where the backend starts absorbing concerns that are not really application concerns.&lt;/p&gt;

&lt;p&gt;At first it looks harmless.&lt;/p&gt;

&lt;p&gt;Later it does not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the Real Problem Is
&lt;/h2&gt;

&lt;p&gt;The problem, as usual, is not where people like to look for it.&lt;/p&gt;

&lt;p&gt;The problem is not whether the model can look at an image.&lt;/p&gt;

&lt;p&gt;That is the easy part.&lt;/p&gt;

&lt;p&gt;The real problem is where all the surrounding mechanics end up living.&lt;/p&gt;

&lt;p&gt;Once prompts, branching, retries, output shaping, provider-specific details, and callback behavior get fused into backend code, even a small AI change starts behaving like an application release.&lt;/p&gt;

&lt;p&gt;Sometimes not even a small one.&lt;/p&gt;

&lt;p&gt;For a feature that is only one part of a bigger system, that is a bad trade.&lt;/p&gt;

&lt;p&gt;In cases like this, I want a different split:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;business state stays inside the app&lt;/li&gt;
&lt;li&gt;everything model-facing moves into a layer that is easier to change, inspect, and repair&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, that layer is often &lt;code&gt;n8n&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Short Version
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;the application owns auth, validation, state, and the final business decision&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; handles preprocessing, model calls, branching, retries, and transformation&lt;/li&gt;
&lt;li&gt;what comes back into the app is not magic but a contract-bound structured payload&lt;/li&gt;
&lt;li&gt;the model-facing logic stays easier to replace, observe, and iterate on&lt;/li&gt;
&lt;/ul&gt;
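
&lt;p&gt;To make "contract-bound structured payload" concrete, here is a minimal sketch. The field names (&lt;code&gt;request_id&lt;/code&gt;, &lt;code&gt;category&lt;/code&gt;, &lt;code&gt;confidence&lt;/code&gt;) are illustrative assumptions, not a real n8n schema:&lt;/p&gt;

```python
# Hypothetical sketch of a contract check on the payload coming back
# from the workflow. Field names and types are illustrative only.
REQUIRED_FIELDS = {"request_id": str, "category": str, "confidence": float}

def validate_payload(payload: dict) -> list[str]:
    """Return a list of contract violations; an empty list means valid."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors
```

&lt;p&gt;The point is not this exact shape. The point is that the app validates against a fixed contract instead of trusting whatever the model happened to emit.&lt;/p&gt;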

&lt;h2&gt;
  
  
  What You Are Actually Building
&lt;/h2&gt;

&lt;p&gt;I do not think about this as "AI inside the app."&lt;/p&gt;

&lt;p&gt;It is a different construction.&lt;/p&gt;

&lt;p&gt;It is a narrow integration boundary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the app accepts an image&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; runs the workflow&lt;/li&gt;
&lt;li&gt;the app receives normalized output and does its normal work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is it.&lt;/p&gt;

&lt;p&gt;Put more bluntly, the app should remain an app, not turn into a nervous half-erased orchestrator for external APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Basic Split
&lt;/h2&gt;

&lt;p&gt;The split is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the application owns transport, auth, validation, and business state&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; owns the model-facing orchestration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That means the application does not need to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;which model is currently in use&lt;/li&gt;
&lt;li&gt;how prompts are written&lt;/li&gt;
&lt;li&gt;how retries are configured&lt;/li&gt;
&lt;li&gt;how the image is preprocessed&lt;/li&gt;
&lt;li&gt;whether the provider changes next month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, this usually leaves the app with two ordinary endpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one accepts the image and starts the workflow&lt;/li&gt;
&lt;li&gt;one accepts normalized output and performs normal business work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And honestly, that is a good sign. When a system does not start growing extra limbs the moment you add AI, things are usually going in the right direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Browser
  │
  ▼
Your App
  │
  ├── accepts image + user context
  ├── assigns request ID / idempotency key
  ▼
n8n
  │
  ├── preprocess image
  ├── fetch domain context if needed
  ├── call the model
  ├── parse and normalize the result
  ├── run confidence / policy check
  ├── route to fallback or manual review if needed
  └── send a signed callback to the app
  ▼
Your App
  │
  ├── validate the contract
  ├── verify callback signature
  ├── apply domain logic
  └── store audit trail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What matters to me here is not only where the model call happens.&lt;/p&gt;

&lt;p&gt;What matters more is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;who owns state&lt;/li&gt;
&lt;li&gt;who validates output&lt;/li&gt;
&lt;li&gt;how retries live&lt;/li&gt;
&lt;li&gt;what exactly crosses the boundary back into the app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because anyone can draw a happy path.&lt;/p&gt;

&lt;p&gt;Then the real world shows up and starts hitting it over the head.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Difference in One Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;AI in backend&lt;/th&gt;
&lt;th&gt;AI orchestration via &lt;code&gt;n8n&lt;/code&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Prompts&lt;/td&gt;
&lt;td&gt;live in application code&lt;/td&gt;
&lt;td&gt;live in workflow logic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Retries / branching&lt;/td&gt;
&lt;td&gt;mixed into backend behavior&lt;/td&gt;
&lt;td&gt;live in the orchestration layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Model switching&lt;/td&gt;
&lt;td&gt;drags backend risk with it&lt;/td&gt;
&lt;td&gt;changes at workflow level&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output shaping&lt;/td&gt;
&lt;td&gt;another special code path&lt;/td&gt;
&lt;td&gt;handled inside workflow transformations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Audit / execution visibility&lt;/td&gt;
&lt;td&gt;needs to be built by hand&lt;/td&gt;
&lt;td&gt;much of it is visible in the execution path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Business state&lt;/td&gt;
&lt;td&gt;mixed with AI flow&lt;/td&gt;
&lt;td&gt;stays in the app&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What the Application Should Actually Do
&lt;/h2&gt;

&lt;p&gt;I generally like the app to stay boring in places like this.&lt;/p&gt;

&lt;p&gt;Boring is a good sign here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Endpoint 1: start the workflow
&lt;/h3&gt;

&lt;p&gt;It should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;accept the image&lt;/li&gt;
&lt;li&gt;assign a request ID or correlation ID&lt;/li&gt;
&lt;li&gt;validate size, MIME type, and coarse input constraints&lt;/li&gt;
&lt;li&gt;pass either the file or a storage reference into &lt;code&gt;n8n&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;pass user-scoped context if it is actually needed&lt;/li&gt;
&lt;li&gt;return either an immediate result or an accepted-processing response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build prompts&lt;/li&gt;
&lt;li&gt;call model SDKs directly&lt;/li&gt;
&lt;li&gt;parse model output&lt;/li&gt;
&lt;li&gt;implement retry policy for model failures&lt;/li&gt;
&lt;li&gt;carry vendor-specific tuning concerns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the payload is heavy or sensitive, I would much rather pass a short-lived storage reference than drag raw image bytes through every hop. Because I can. And because it usually means less swearing later.&lt;/p&gt;
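
&lt;p&gt;A framework-free sketch of this endpoint, under stated assumptions: the size limit, the MIME allow-list, and the hand-off shape are all hypothetical, and the actual POST to the n8n webhook is left as a comment:&lt;/p&gt;

```python
# Hedged sketch of the "start the workflow" endpoint.
# MAX_BYTES, ALLOWED_MIME, and the response shape are assumptions.
import uuid

MAX_BYTES = 10 * 1024 * 1024  # coarse input constraint, tune to taste
ALLOWED_MIME = {"image/jpeg", "image/png", "image/webp"}

def start_workflow(image_bytes: bytes, mime_type: str) -> dict:
    """Validate coarse constraints, assign a request ID, hand off to n8n."""
    if mime_type not in ALLOWED_MIME:
        return {"status": "rejected", "reason": "unsupported mime type"}
    if len(image_bytes) > MAX_BYTES:
        return {"status": "rejected", "reason": "image too large"}
    request_id = str(uuid.uuid4())
    # In a real app this would POST the file, or a short-lived storage
    # reference, plus request_id to the n8n webhook (requests/httpx).
    return {"status": "accepted", "request_id": request_id}
```

&lt;p&gt;Note what is missing: no prompts, no SDK calls, no retry policy. That absence is the whole design.&lt;/p&gt;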

&lt;h3&gt;
  
  
  Endpoint 2: accept normalized output
&lt;/h3&gt;

&lt;p&gt;It should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;accept structured JSON from &lt;code&gt;n8n&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;verify callback authenticity&lt;/li&gt;
&lt;li&gt;enforce idempotency on repeated delivery&lt;/li&gt;
&lt;li&gt;validate the payload against a stable contract&lt;/li&gt;
&lt;li&gt;create, update, or link entities&lt;/li&gt;
&lt;li&gt;record the result for audit and debugging&lt;/li&gt;
&lt;li&gt;return a normal application response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;care which exact model produced the result&lt;/li&gt;
&lt;li&gt;care how the image was analyzed&lt;/li&gt;
&lt;li&gt;care how retries and fallbacks lived upstream&lt;/li&gt;
&lt;li&gt;treat AI payload as some kind of sacred business object&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If tomorrow the same payload comes from a CSV import, a human form, or an internal service, this endpoint should behave the same way.&lt;/p&gt;

&lt;p&gt;That is what a correct boundary looks like.&lt;/p&gt;
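
&lt;p&gt;The idempotency and contract parts of that endpoint can be sketched like this. The in-memory set stands in for a durable store, and the field names are assumptions:&lt;/p&gt;

```python
# Hedged sketch of the callback endpoint: idempotent delivery plus a
# minimal contract check. A real app would back _processed with a DB.
_processed: set[str] = set()

def handle_callback(payload: dict) -> dict:
    """Accept normalized output from the workflow, exactly once."""
    request_id = payload.get("request_id")
    if not isinstance(request_id, str):
        return {"status": "invalid", "reason": "missing request_id"}
    if request_id in _processed:
        # Repeated delivery: acknowledge without re-applying domain logic.
        return {"status": "duplicate"}
    if "category" not in payload:
        return {"status": "invalid", "reason": "contract violation"}
    _processed.add(request_id)
    # ... create/update/link entities, record the audit trail ...
    return {"status": "applied"}
```

&lt;p&gt;Nothing in this handler knows a model exists. A CSV import or a human form hitting the same contract would take the same path.&lt;/p&gt;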

&lt;h2&gt;
  
  
  Why I Like This Pattern
&lt;/h2&gt;

&lt;p&gt;Because it puts responsibility where it belongs.&lt;/p&gt;

&lt;h3&gt;
  
  
  The app keeps:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;authentication&lt;/li&gt;
&lt;li&gt;permissions&lt;/li&gt;
&lt;li&gt;domain validation&lt;/li&gt;
&lt;li&gt;database writes&lt;/li&gt;
&lt;li&gt;auditability&lt;/li&gt;
&lt;li&gt;final business decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;n8n&lt;/code&gt; keeps:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;model orchestration&lt;/li&gt;
&lt;li&gt;prompt engineering&lt;/li&gt;
&lt;li&gt;image preprocessing&lt;/li&gt;
&lt;li&gt;retry behavior&lt;/li&gt;
&lt;li&gt;workflow branching&lt;/li&gt;
&lt;li&gt;fallback behavior&lt;/li&gt;
&lt;li&gt;execution visibility&lt;/li&gt;
&lt;li&gt;vendor switching&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I do not think of &lt;code&gt;n8n&lt;/code&gt; here as "the place where prompts live."&lt;/p&gt;

&lt;p&gt;That description is too narrow.&lt;/p&gt;

&lt;p&gt;For me it is the operational layer around the model call. The layer where all the surrounding mess lives without being smeared across backend code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Most Important Part: Domain Context
&lt;/h2&gt;

&lt;p&gt;This is where it actually gets interesting.&lt;/p&gt;

&lt;p&gt;Because the whole story is not "send an image to AI."&lt;/p&gt;

&lt;p&gt;Any fool can do that.&lt;/p&gt;

&lt;p&gt;The interesting part starts when, before the model call, you pull live domain context from the app into the workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;category trees&lt;/li&gt;
&lt;li&gt;allowed statuses&lt;/li&gt;
&lt;li&gt;existing entities&lt;/li&gt;
&lt;li&gt;formatting rules&lt;/li&gt;
&lt;li&gt;language codes&lt;/li&gt;
&lt;li&gt;internal taxonomies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then instead of asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"What category does this belong to?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;you ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Choose the best option from this exact list of categories that already exists in the system."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At that point the answer starts moving from "well, sounds plausible" toward "this can actually be fed back into the machine."&lt;/p&gt;

&lt;p&gt;That difference is not cosmetic.&lt;/p&gt;

&lt;p&gt;Without context, AI usually gives you text.&lt;/p&gt;

&lt;p&gt;With context, it sometimes starts giving you usable input.&lt;/p&gt;

&lt;p&gt;That is a very different conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Realistic Flow
&lt;/h2&gt;

&lt;p&gt;In practice, this usually looks like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the client uploads an image to the app&lt;/li&gt;
&lt;li&gt;the app validates the request and assigns a request ID&lt;/li&gt;
&lt;li&gt;the app passes the file or storage reference to &lt;code&gt;n8n&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; normalizes the image&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; pulls domain context&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; sends image and context to the model&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; parses and normalizes the result&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; runs confidence or policy checks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n8n&lt;/code&gt; either calls back or routes the case to manual review&lt;/li&gt;
&lt;li&gt;the app validates the callback and applies domain logic&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For a huge number of real tasks, that is more than enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why &lt;code&gt;n8n&lt;/code&gt; Fits Here At All
&lt;/h2&gt;

&lt;p&gt;Because this is mostly an orchestration problem, not a "write some more backend code" problem.&lt;/p&gt;

&lt;p&gt;You need a layer that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;wait for external APIs&lt;/li&gt;
&lt;li&gt;branch on errors&lt;/li&gt;
&lt;li&gt;survive transient failures&lt;/li&gt;
&lt;li&gt;transform payloads&lt;/li&gt;
&lt;li&gt;insert a review step&lt;/li&gt;
&lt;li&gt;call back into the app&lt;/li&gt;
&lt;li&gt;show execution history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yes, you can drag all of that into the application.&lt;/p&gt;

&lt;p&gt;You can also write your own bus, your own retry engine, your own tracing layer, and your own wrapper around the model.&lt;/p&gt;

&lt;p&gt;You can.&lt;/p&gt;

&lt;p&gt;The only question is why that should be the default move when AI is one function in the system, not the meaning of the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Notes
&lt;/h2&gt;

&lt;p&gt;This only works while trust boundaries do not turn into a circus.&lt;/p&gt;

&lt;h3&gt;
  
  
  Provider secrets should not live in the app
&lt;/h3&gt;

&lt;p&gt;The model vendor key is better off in &lt;code&gt;n8n&lt;/code&gt;, not in the frontend and not in every corner of the main application.&lt;/p&gt;

&lt;h3&gt;
  
  
  The workflow entry point needs protection
&lt;/h3&gt;

&lt;p&gt;Do not expose a webhook that anybody can use to throw arbitrary images at your system. That is a bad idea not because it is morally naughty, but because it will hurt later and may also cost you money.&lt;/p&gt;

&lt;h3&gt;
  
  
  Callbacks need adult handling
&lt;/h3&gt;

&lt;p&gt;Use signed callbacks, a trusted caller policy, or both. Callback authenticity should be part of the contract, not a matter of faith.&lt;/p&gt;
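
&lt;p&gt;One common shape for this is an HMAC signature over the callback body. A minimal sketch; the secret handling and header name are assumptions, not an n8n API:&lt;/p&gt;

```python
# Minimal HMAC-SHA256 signing sketch for callback authenticity.
# SECRET is illustrative; load it from env/secret storage in real code.
import hashlib
import hmac

SECRET = b"shared-secret-from-config"

def sign(body: bytes) -> str:
    """Compute the signature the workflow attaches to its callback."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Verify on the app side; compare_digest avoids timing leaks."""
    return hmac.compare_digest(sign(body), signature)
```

&lt;p&gt;The workflow signs the raw body, the app recomputes and compares. Anything that fails verification never reaches domain logic.&lt;/p&gt;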

&lt;h3&gt;
  
  
  Do not log sensitive payloads carelessly
&lt;/h3&gt;

&lt;p&gt;That includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;raw image bytes&lt;/li&gt;
&lt;li&gt;base64 payloads&lt;/li&gt;
&lt;li&gt;bearer tokens&lt;/li&gt;
&lt;li&gt;full model payloads with user data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly the kind of thing that comes back later carrying an axe.&lt;/p&gt;
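
&lt;p&gt;A cheap guard is to redact known-sensitive keys before anything hits a logger. The key list here is an assumption; extend it to match your own payloads:&lt;/p&gt;

```python
# Sketch of masking sensitive keys in a payload before logging it.
# SENSITIVE_KEYS is illustrative, not exhaustive.
SENSITIVE_KEYS = {"image_base64", "authorization", "bearer_token", "raw_bytes"}

def redact(payload: dict) -> dict:
    """Return a copy of the payload that is safe to log."""
    safe = {}
    for key, value in payload.items():
        if key.lower() in SENSITIVE_KEYS:
            safe[key] = "[REDACTED]"
        else:
            safe[key] = value
    return safe
```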

&lt;h2&gt;
  
  
  What This Gives You
&lt;/h2&gt;

&lt;p&gt;If you go this way, several very concrete things get easier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;changing prompts without rewriting application logic&lt;/li&gt;
&lt;li&gt;testing workflow behavior separately from core app behavior&lt;/li&gt;
&lt;li&gt;adding retries, branching, and review steps without backend sprawl&lt;/li&gt;
&lt;li&gt;changing models with less backend coupling&lt;/li&gt;
&lt;li&gt;feeding more domain context later&lt;/li&gt;
&lt;li&gt;keeping a clearer execution path for debugging&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What You Still Pay For
&lt;/h2&gt;

&lt;p&gt;Nothing here is free.&lt;/p&gt;

&lt;p&gt;You are adding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one more moving part&lt;/li&gt;
&lt;li&gt;one more network hop&lt;/li&gt;
&lt;li&gt;one more execution surface&lt;/li&gt;
&lt;li&gt;callback idempotency concerns&lt;/li&gt;
&lt;li&gt;audit and traceability work&lt;/li&gt;
&lt;li&gt;evaluation drift if the model starts behaving differently over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If &lt;code&gt;n8n&lt;/code&gt; goes down, the feature goes down with it.&lt;/p&gt;

&lt;p&gt;If the callback contract is sloppy, the integration will start rotting.&lt;/p&gt;

&lt;p&gt;If nobody watches executions, retries, and bad outputs, the workflow will quietly degrade without much warning.&lt;/p&gt;

&lt;p&gt;That is an honest price.&lt;/p&gt;

&lt;p&gt;I still prefer it to hiding the same complexity deeper in application code where it is harder to see and more annoying to change.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rule I Keep
&lt;/h2&gt;

&lt;p&gt;If the application owns business state, let it keep owning business state.&lt;/p&gt;

&lt;p&gt;If the workflow owns model calls, retries, validation gates, and payload cleanup, let it keep owning those too.&lt;/p&gt;

&lt;p&gt;Do not mix the two just because "well, technically we can."&lt;/p&gt;

&lt;p&gt;Technically, we can do lots of stupid things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Take
&lt;/h2&gt;

&lt;p&gt;I think the question "how do I avoid SDKs forever?" is slightly crooked to begin with.&lt;/p&gt;

&lt;p&gt;The more useful question is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Where should the model-facing logic live so the rest of the system does not become fragile, muddy, and expensive to maintain?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When image recognition is just one feature inside a bigger product, &lt;code&gt;n8n&lt;/code&gt; acting as the orchestration layer is often a perfectly sane answer.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
