<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Patrick Cornelißen</title>
    <description>The latest articles on Forem by Patrick Cornelißen (@pcornelissen).</description>
    <link>https://forem.com/pcornelissen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F44248%2F8042d018-59f9-4bc6-93a7-9e106b6b515d.jpeg</url>
      <title>Forem: Patrick Cornelißen</title>
      <link>https://forem.com/pcornelissen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pcornelissen"/>
    <language>en</language>
    <item>
      <title>Vibe coding: where it helps and where it breaks</title>
      <dc:creator>Patrick Cornelißen</dc:creator>
      <pubDate>Sun, 03 May 2026 10:00:00 +0000</pubDate>
      <link>https://forem.com/pcornelissen/vibe-coding-where-it-helps-and-where-it-breaks-71k</link>
      <guid>https://forem.com/pcornelissen/vibe-coding-where-it-helps-and-where-it-breaks-71k</guid>
      <description>&lt;p&gt;Vibe coding is one of those terms that sounds unserious until you notice how many people are actually doing it.&lt;/p&gt;

&lt;p&gt;The basic idea is simple: describe what you want, let an AI coding tool generate the implementation, run it, adjust the prompt, and keep going.&lt;/p&gt;

&lt;p&gt;It can feel magical. It can also go wrong very quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What vibe coding is good at
&lt;/h2&gt;

&lt;p&gt;Vibe coding works best when the problem is visible and forgiving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;small prototypes&lt;/li&gt;
&lt;li&gt;internal tools&lt;/li&gt;
&lt;li&gt;UI experiments&lt;/li&gt;
&lt;li&gt;scripts&lt;/li&gt;
&lt;li&gt;throwaway demos&lt;/li&gt;
&lt;li&gt;first drafts of a feature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In those cases, speed matters more than perfect architecture. You can see whether the result works, and mistakes are usually cheap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where it breaks
&lt;/h2&gt;

&lt;p&gt;The approach becomes risky when the code has to survive contact with real users.&lt;/p&gt;

&lt;p&gt;Typical failure modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hidden edge cases&lt;/li&gt;
&lt;li&gt;inconsistent state management&lt;/li&gt;
&lt;li&gt;weak error handling&lt;/li&gt;
&lt;li&gt;missing tests&lt;/li&gt;
&lt;li&gt;security issues&lt;/li&gt;
&lt;li&gt;duplicated logic&lt;/li&gt;
&lt;li&gt;code that "works" but nobody understands&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The danger is not that the AI writes bad code every time. The danger is that it often writes plausible code. Plausible code is harder to distrust.&lt;/p&gt;

&lt;h2&gt;
  
  
  The human job changes
&lt;/h2&gt;

&lt;p&gt;With vibe coding, the developer's role shifts. You spend less time typing boilerplate and more time judging direction.&lt;/p&gt;

&lt;p&gt;You still need to decide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what should be built&lt;/li&gt;
&lt;li&gt;which constraints matter&lt;/li&gt;
&lt;li&gt;whether the implementation fits the architecture&lt;/li&gt;
&lt;li&gt;which tests are missing&lt;/li&gt;
&lt;li&gt;whether the code is maintainable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you skip that judgment, the speed advantage turns into technical debt.&lt;/p&gt;

&lt;h2&gt;
  
  
  A better workflow
&lt;/h2&gt;

&lt;p&gt;Vibe coding becomes more useful when it is paired with review loops:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Ask the AI to implement a small slice.
2. Run the tests.
3. Ask the AI to explain the changes.
4. Review the diff yourself.
5. Ask for tests and edge cases.
6. Refactor before moving on.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The point is to keep each step small enough that you can still understand the result.&lt;/p&gt;
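&lt;p&gt;As a sketch of what a small, reviewable slice looks like: suppose the AI produced a price-parsing helper in step 1. The helper and its tests below are invented for illustration, not taken from any real codebase:&lt;/p&gt;

```typescript
// Hypothetical helper the AI might produce for one small slice:
// parse a user-entered price string into cents, rejecting bad input.
function parsePrice(input: string): number {
  const trimmed = input.trim();
  if (trimmed === "") {
    throw new Error("empty price");
  }
  const value = Number(trimmed);
  if (Number.isFinite(value) === false) {
    throw new Error(`not a number: ${trimmed}`);
  }
  if (value >= 0) {
    return Math.round(value * 100);
  }
  throw new Error("negative price");
}

// A tiny check harness for the edge cases you would ask for in step 5.
function expectThrows(fn: () => number): boolean {
  try {
    fn();
    return false;
  } catch {
    return true;
  }
}

console.log(parsePrice("19.99"));                    // 1999
console.log(expectThrows(() => parsePrice("")));     // true
console.log(expectThrows(() => parsePrice("abc")));  // true
console.log(expectThrows(() => parsePrice("-5")));   // true
```

&lt;p&gt;The edge-case checks are exactly what step 5 of the loop should produce: empty, malformed and negative input are all covered before you move on.&lt;/p&gt;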

&lt;h2&gt;
  
  
  The real skill
&lt;/h2&gt;

&lt;p&gt;The real skill is not prompting the model into writing more code. It is knowing when to stop and inspect.&lt;/p&gt;

&lt;p&gt;Good questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What assumptions did the model make?&lt;/li&gt;
&lt;li&gt;What happens when input is empty or invalid?&lt;/li&gt;
&lt;li&gt;Which part of this code would be hard to change later?&lt;/li&gt;
&lt;li&gt;Is there a simpler design?&lt;/li&gt;
&lt;li&gt;Would I be comfortable reviewing this in a pull request?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the answer to that last question is no, the workflow is moving too fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;Vibe coding is useful as a creative and exploratory mode. It is not a replacement for engineering discipline.&lt;/p&gt;

&lt;p&gt;Use it to get moving. Do not use it as an excuse to stop reading the code.&lt;/p&gt;




&lt;p&gt;This article is based on the German original on KIberblick:&lt;br&gt;
&lt;a href="https://kiberblick.de/artikel/grundlagen/vibe-coding/" rel="noopener noreferrer"&gt;https://kiberblick.de/artikel/grundlagen/vibe-coding/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>workflow</category>
      <category>webdev</category>
    </item>
    <item>
      <title>From prompt engineering to context engineering</title>
      <dc:creator>Patrick Cornelißen</dc:creator>
      <pubDate>Sat, 02 May 2026 10:00:00 +0000</pubDate>
      <link>https://forem.com/pcornelissen/from-prompt-engineering-to-context-engineering-39ka</link>
      <guid>https://forem.com/pcornelissen/from-prompt-engineering-to-context-engineering-39ka</guid>
      <description>&lt;p&gt;For a long time, "prompt engineering" meant finding the right words. Better instructions, clearer examples, stricter output formats.&lt;/p&gt;

&lt;p&gt;That still matters, but it is no longer the whole story. The more useful shift is from prompt engineering to context engineering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompting is only one layer
&lt;/h2&gt;

&lt;p&gt;A good prompt can improve an answer. But many failures do not come from bad wording. They come from missing context.&lt;/p&gt;

&lt;p&gt;The model does not know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your codebase conventions&lt;/li&gt;
&lt;li&gt;the current ticket&lt;/li&gt;
&lt;li&gt;the relevant API documentation&lt;/li&gt;
&lt;li&gt;your team's security rules&lt;/li&gt;
&lt;li&gt;which files changed&lt;/li&gt;
&lt;li&gt;what "done" means in this workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If that context is missing, the model has to guess. Better phrasing will not fix that.&lt;/p&gt;

&lt;h2&gt;
  
  
  What context engineering means
&lt;/h2&gt;

&lt;p&gt;Context engineering is the practice of deliberately shaping what the AI sees before it acts.&lt;/p&gt;

&lt;p&gt;That includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;system instructions&lt;/li&gt;
&lt;li&gt;project documentation&lt;/li&gt;
&lt;li&gt;examples of good output&lt;/li&gt;
&lt;li&gt;relevant source files&lt;/li&gt;
&lt;li&gt;test results&lt;/li&gt;
&lt;li&gt;tool output&lt;/li&gt;
&lt;li&gt;business constraints&lt;/li&gt;
&lt;li&gt;role-specific requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is not to dump everything into the context window. The goal is to provide the smallest useful context that lets the model make the right decision.&lt;/p&gt;
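&lt;p&gt;That selection step can be sketched as code. A minimal, illustrative version: rank candidate snippets by some precomputed relevance score and include them until a size budget is reached. The scoring, names and budget are assumptions, not a real retrieval implementation:&lt;/p&gt;

```typescript
// Illustrative sketch of "smallest useful context": greedy selection
// of the most relevant snippets under a character budget.
interface Snippet {
  name: string;
  text: string;
  relevance: number; // assumed precomputed, higher is better
}

function selectContext(snippets: Snippet[], budgetChars: number): Snippet[] {
  const ranked = [...snippets].sort((a, b) => b.relevance - a.relevance);
  const chosen: Snippet[] = [];
  let used = 0;
  for (const s of ranked) {
    if (used + s.text.length > budgetChars) {
      continue; // skip snippets that would blow the budget
    }
    chosen.push(s);
    used += s.text.length;
  }
  return chosen;
}

const picked = selectContext(
  [
    { name: "README", text: "x".repeat(400), relevance: 0.9 },
    { name: "old-design-doc", text: "x".repeat(900), relevance: 0.2 },
    { name: "ticket", text: "x".repeat(300), relevance: 0.8 },
  ],
  800
);
console.log(picked.map((s) => s.name)); // ["README", "ticket"]
```

&lt;p&gt;The stale design doc loses not because it is irrelevant, but because it does not fit the budget for this task. That is the trade-off made explicit.&lt;/p&gt;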

&lt;h2&gt;
  
  
  A simple example
&lt;/h2&gt;

&lt;p&gt;A weak request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Review this code.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A better prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Review this TypeScript code for bugs and readability.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A context-engineered workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Review the diff for this pull request.
Use our TypeScript conventions from AGENTS.md.
Pay special attention to error handling and tests.
Include only findings that could affect behavior, security or maintainability.
Reference file paths and lines.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last version is not just better wording. It tells the model what information matters and what kind of output is useful.&lt;/p&gt;
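&lt;p&gt;The same request can also be assembled programmatically, so the context is gathered instead of retyped. A hedged sketch; only the AGENTS.md file name comes from the example above, the function and its parameters are hypothetical:&lt;/p&gt;

```typescript
// Illustrative sketch: build the review request from versioned sources.
// Only AGENTS.md is taken from the example above; everything else
// (the interface, the function, the separators) is invented.
interface ReviewContext {
  diff: string;          // the pull request diff
  conventions: string;   // contents of AGENTS.md
  focus: string[];       // areas to pay attention to
}

function buildReviewPrompt(ctx: ReviewContext): string {
  const focusList = ctx.focus.map((f) => `- ${f}`).join("\n");
  return [
    "Review the diff for this pull request.",
    "Use our TypeScript conventions below.",
    `Pay special attention to:\n${focusList}`,
    "Include only findings that could affect behavior, security or maintainability.",
    "Reference file paths and lines.",
    `--- conventions ---\n${ctx.conventions}`,
    `--- diff ---\n${ctx.diff}`,
  ].join("\n\n");
}

const prompt = buildReviewPrompt({
  diff: "diff --git a/src/app.ts b/src/app.ts ...",
  conventions: "(contents of AGENTS.md)",
  focus: ["error handling", "tests"],
});
console.log(prompt.startsWith("Review the diff")); // true
```

&lt;p&gt;Once the request is code, the conventions file and the diff are fetched fresh every time, which is the point of context engineering.&lt;/p&gt;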

&lt;h2&gt;
  
  
  Context beats cleverness
&lt;/h2&gt;

&lt;p&gt;Teams often spend too much time tuning a single prompt and too little time improving the surrounding workflow.&lt;/p&gt;

&lt;p&gt;Useful context usually comes from boring places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a clear project README&lt;/li&gt;
&lt;li&gt;a short architecture note&lt;/li&gt;
&lt;li&gt;well-named files&lt;/li&gt;
&lt;li&gt;test output&lt;/li&gt;
&lt;li&gt;a current ticket description&lt;/li&gt;
&lt;li&gt;examples of accepted work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI tools become much stronger when these sources are easy to retrieve.&lt;/p&gt;

&lt;h2&gt;
  
  
  The risk of too much context
&lt;/h2&gt;

&lt;p&gt;More context is not always better. Huge context dumps can make the model slower, more expensive and less focused.&lt;/p&gt;

&lt;p&gt;Good context has shape:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;include what is relevant&lt;/li&gt;
&lt;li&gt;exclude stale information&lt;/li&gt;
&lt;li&gt;prefer source files over summaries when precision matters&lt;/li&gt;
&lt;li&gt;prefer summaries when the details are not needed&lt;/li&gt;
&lt;li&gt;keep instructions consistent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The skill is deciding what the model needs for this task, not what might be interesting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical team pattern
&lt;/h2&gt;

&lt;p&gt;One useful pattern is to move repeated instructions out of chat and into versioned files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;coding conventions&lt;/li&gt;
&lt;li&gt;review checklists&lt;/li&gt;
&lt;li&gt;release note format&lt;/li&gt;
&lt;li&gt;security rules&lt;/li&gt;
&lt;li&gt;writing style&lt;/li&gt;
&lt;li&gt;deployment steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then the AI tool can load those instructions when needed. This is more reliable than copying an old prompt from Slack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;Prompt engineering is still useful, but it is only one part of the workflow. The bigger advantage comes from giving AI systems the right context at the right moment.&lt;/p&gt;

&lt;p&gt;That is what makes the difference between a clever answer and a useful result.&lt;/p&gt;




&lt;p&gt;This article is based on the German original on KIberblick:&lt;br&gt;
&lt;a href="https://kiberblick.de/artikel/grundlagen/prompt-engineering-2026/" rel="noopener noreferrer"&gt;https://kiberblick.de/artikel/grundlagen/prompt-engineering-2026/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>prompting</category>
      <category>workflow</category>
      <category>productivity</category>
    </item>
    <item>
      <title>MCP explained: how AI tools connect to real systems</title>
      <dc:creator>Patrick Cornelißen</dc:creator>
      <pubDate>Fri, 01 May 2026 10:00:00 +0000</pubDate>
      <link>https://forem.com/pcornelissen/mcp-explained-how-ai-tools-connect-to-real-systems-2b1e</link>
      <guid>https://forem.com/pcornelissen/mcp-explained-how-ai-tools-connect-to-real-systems-2b1e</guid>
      <description>&lt;p&gt;Most AI tools started as isolated chat windows. You pasted in a prompt, copied the answer back out, and hoped the model had enough context.&lt;/p&gt;

&lt;p&gt;That interaction model does not scale well. Modern AI agents need access to tools, files, APIs and structured context. That is the problem the Model Context Protocol, or MCP, tries to solve.&lt;/p&gt;

&lt;h2&gt;
  
  
  What MCP is
&lt;/h2&gt;

&lt;p&gt;MCP is a protocol for connecting AI applications to external tools and data sources. Instead of every AI app inventing its own plugin system, MCP defines a shared way for tools to expose capabilities to models.&lt;/p&gt;

&lt;p&gt;In practice, an MCP server can provide things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;database queries&lt;/li&gt;
&lt;li&gt;file access&lt;/li&gt;
&lt;li&gt;issue tracker data&lt;/li&gt;
&lt;li&gt;browser automation&lt;/li&gt;
&lt;li&gt;internal documentation search&lt;/li&gt;
&lt;li&gt;custom business tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI client can then discover and call those tools through a common interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;The interesting part is not "the model can call an API". That has been possible for a while. The interesting part is standardization.&lt;/p&gt;

&lt;p&gt;Without a shared protocol, every integration becomes a one-off bridge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI client A -&amp;gt; custom GitHub integration
AI client B -&amp;gt; different GitHub integration
AI client C -&amp;gt; yet another GitHub integration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With MCP, the shape becomes cleaner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI client -&amp;gt; MCP server -&amp;gt; tool or data source
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That does not remove all complexity, but it gives teams a better integration boundary.&lt;/p&gt;

&lt;h2&gt;
  
  
  A useful mental model
&lt;/h2&gt;

&lt;p&gt;Think of MCP servers as adapters between an AI agent and a real system.&lt;/p&gt;

&lt;p&gt;The model should not know every detail of your internal API. It should know that a tool exists, what it does, what parameters it accepts, and what kind of result it returns.&lt;/p&gt;

&lt;p&gt;That separation is important. It makes tool access easier to review, test and restrict.&lt;/p&gt;
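&lt;p&gt;Concretely, an MCP server advertises each tool as a name, a human-readable description and a JSON Schema for its input. A sketch of that contract; the ticket-lookup tool itself is invented for illustration:&lt;/p&gt;

```typescript
// Rough shape of a tool description as an MCP server advertises it.
// The concrete tool (ticket lookup) and its fields are illustrative.
const ticketLookupTool = {
  name: "get_ticket",
  description: "Fetch one issue tracker ticket by its ID.",
  inputSchema: {
    type: "object",
    properties: {
      ticketId: { type: "string", description: "e.g. PROJ-123" },
    },
    required: ["ticketId"],
  },
};

// The model never sees your internal API, only this contract.
console.log(ticketLookupTool.inputSchema.required); // ["ticketId"]
```

&lt;p&gt;Everything behind that contract, such as authentication, retries or the real API shape, stays inside the server where it can be reviewed and tested.&lt;/p&gt;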

&lt;h2&gt;
  
  
  Where MCP helps
&lt;/h2&gt;

&lt;p&gt;Good MCP use cases are usually context-heavy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Summarize the open bugs for this release."&lt;/li&gt;
&lt;li&gt;"Find the related pull requests for this Jira ticket."&lt;/li&gt;
&lt;li&gt;"Check whether this API route has documentation."&lt;/li&gt;
&lt;li&gt;"Create a draft changelog from merged commits."&lt;/li&gt;
&lt;li&gt;"Look up our internal policy before answering."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In all of these examples, the model is useful only if it can reach the right context.&lt;/p&gt;

&lt;h2&gt;
  
  
  The security angle
&lt;/h2&gt;

&lt;p&gt;MCP also creates a new security boundary. A tool server can expose sensitive data or actions, so teams need to treat it like infrastructure, not like a harmless prompt helper.&lt;/p&gt;

&lt;p&gt;At minimum, think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;which tools are exposed&lt;/li&gt;
&lt;li&gt;which actions are read-only&lt;/li&gt;
&lt;li&gt;which actions mutate state&lt;/li&gt;
&lt;li&gt;how credentials are stored&lt;/li&gt;
&lt;li&gt;whether the model can reach production data&lt;/li&gt;
&lt;li&gt;how tool calls are logged&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The protocol makes integration easier. It does not make governance optional.&lt;/p&gt;
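&lt;p&gt;Part of that governance can live in code rather than policy. A sketch of one control: expose only an allowlisted, read-only subset of a server's tools to the model. The tool names and the mutation flag are illustrative assumptions:&lt;/p&gt;

```typescript
// Illustrative governance sketch: filter the tool list before the
// model ever sees it. Tool names and flags are invented.
interface ToolInfo {
  name: string;
  mutatesState: boolean;
}

const allTools: ToolInfo[] = [
  { name: "search_docs", mutatesState: false },
  { name: "get_ticket", mutatesState: false },
  { name: "close_ticket", mutatesState: true },
  { name: "run_sql", mutatesState: true },
];

const allowlist = new Set(["search_docs", "get_ticket"]);

function exposedTools(tools: ToolInfo[]): ToolInfo[] {
  return tools.filter((t) => {
    if (t.mutatesState) {
      return false; // never expose mutating actions by default
    }
    return allowlist.has(t.name);
  });
}

console.log(exposedTools(allTools).map((t) => t.name));
// ["search_docs", "get_ticket"]
```

&lt;p&gt;Mutating actions can still be exposed deliberately, but then they belong behind logging and an explicit approval step, not in the default set.&lt;/p&gt;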

&lt;h2&gt;
  
  
  When to build an MCP server
&lt;/h2&gt;

&lt;p&gt;Do not build one just because MCP is trendy. Build one when the same tool or data source should be available to multiple AI clients or workflows.&lt;/p&gt;

&lt;p&gt;Good signs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the integration will be reused&lt;/li&gt;
&lt;li&gt;the data source is important enough to control&lt;/li&gt;
&lt;li&gt;tool behavior should be logged or tested&lt;/li&gt;
&lt;li&gt;the team wants one maintained integration instead of many ad hoc scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If it is a one-off experiment, a small script may be enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;MCP is useful because it gives AI tools a more stable way to interact with the systems teams already use. The biggest value is not novelty. It is making context and tool access explicit enough to maintain.&lt;/p&gt;




&lt;p&gt;This article is based on the German original on KIberblick:&lt;br&gt;
&lt;a href="https://kiberblick.de/artikel/grundlagen/mcp-model-context-protocol/" rel="noopener noreferrer"&gt;https://kiberblick.de/artikel/grundlagen/mcp-model-context-protocol/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>tools</category>
      <category>agents</category>
    </item>
    <item>
      <title>AI workflows with n8n: practical automation for teams</title>
      <dc:creator>Patrick Cornelißen</dc:creator>
      <pubDate>Thu, 30 Apr 2026 09:24:25 +0000</pubDate>
      <link>https://forem.com/pcornelissen/ai-workflows-with-n8n-practical-automation-for-teams-3604</link>
      <guid>https://forem.com/pcornelissen/ai-workflows-with-n8n-practical-automation-for-teams-3604</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop4kg6v6wfbvjjz8lf93.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop4kg6v6wfbvjjz8lf93.webp" alt="n8n workflow editor" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most AI tools are useful in isolation: write text, review code, summarize a document, classify a ticket. The bigger productivity jump starts when AI becomes part of an existing workflow.&lt;/p&gt;

&lt;p&gt;That is where &lt;a href="https://n8n.io" rel="noopener noreferrer"&gt;n8n&lt;/a&gt; gets interesting. It is a fair-code licensed workflow automation platform, similar in spirit to Zapier or Make, but with two important differences: it can be self-hosted, and its AI features are deep enough for real operational workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why n8n is worth a closer look
&lt;/h2&gt;

&lt;p&gt;n8n is built around visual workflows: triggers, nodes, branches, API calls and data transformations. For AI work, that means you can place a model call exactly where it belongs instead of treating AI as a separate chat window.&lt;/p&gt;

&lt;p&gt;Current reasons teams look at n8n for AI automation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Self-hosting for teams that cannot send workflow data through another SaaS platform&lt;/li&gt;
&lt;li&gt;More than 700 integrations, including Slack, Jira, GitHub, Notion and Google Workspace&lt;/li&gt;
&lt;li&gt;AI nodes for OpenAI, Anthropic, LangChain, vector databases and agent workflows&lt;/li&gt;
&lt;li&gt;Human-in-the-loop steps where a workflow can pause for approval&lt;/li&gt;
&lt;li&gt;Pricing based on workflow executions rather than every single step&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For teams in regulated environments, the self-hosting option is the big one. API keys, customer data and workflow state can stay on infrastructure the team controls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example 1: AI-assisted support ticket routing
&lt;/h2&gt;

&lt;p&gt;A simple but useful automation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Trigger: New ticket in the helpdesk

Step 1: Claude classifies the ticket
  - Category: bug, feature request, question, complaint
  - Priority: high, medium, low
  - Affected product area

Step 2: If priority is high
  - Send a Slack message to the on-call team
  - Update the ticket priority in the helpdesk

Step 3: Generate a first response draft
  - Pause for human review
  - Send only after approval
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important part is not the model call itself. It is the handoff around it: routing, notification, review and a clear approval point before anything reaches a customer.&lt;/p&gt;
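&lt;p&gt;The branching in step 2 is plain conditional logic; in n8n it would typically live in an IF or Switch node. A sketch with an invented classification shape and invented action names:&lt;/p&gt;

```typescript
// Illustrative sketch of the handoff logic around the model call.
// The classification shape and action names are assumptions.
interface Classification {
  category: "bug" | "feature request" | "question" | "complaint";
  priority: "high" | "medium" | "low";
  area: string;
}

function routeTicket(c: Classification): string[] {
  const actions: string[] = [];
  if (c.priority === "high") {
    actions.push(`notify-slack:on-call-${c.area}`);
    actions.push("update-helpdesk-priority:high");
  }
  actions.push("draft-reply:await-human-approval"); // never auto-send
  return actions;
}

console.log(routeTicket({ category: "bug", priority: "high", area: "billing" }));
// ["notify-slack:on-call-billing", "update-helpdesk-priority:high",
//  "draft-reply:await-human-approval"]
```

&lt;p&gt;Note that the approval step is unconditional: whatever the classifier says, nothing reaches a customer without a human looking at it first.&lt;/p&gt;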

&lt;h2&gt;
  
  
  Example 2: Sprint review summaries
&lt;/h2&gt;

&lt;p&gt;Another practical workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Trigger: Every Friday at 16:00

Step 1: Pull completed tickets from Jira
Step 2: Pull related pull requests and commits from GitHub
Step 3: Ask an AI node to produce a sprint summary
  - What shipped?
  - What is still open?
  - What risks should the team discuss?
Step 4: Post the summary to Slack or Teams
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This does not replace the sprint review. It removes the boring preparation work so the meeting can focus on decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example 3: A lightweight RAG pipeline
&lt;/h2&gt;

&lt;p&gt;n8n can also orchestrate document workflows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Trigger A: New document in Google Drive or Confluence

Step 1: Extract the document text
Step 2: Split it into chunks
Step 3: Create embeddings
Step 4: Store them in a vector database

Trigger B: User asks a question

Step 1: Embed the question
Step 2: Retrieve matching document chunks
Step 3: Answer with context and source references
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not a replacement for a purpose-built knowledge platform. But for internal workflows, prototypes and small team automations, it can be enough.&lt;/p&gt;
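&lt;p&gt;The chunking step is simple enough to sketch directly. An illustrative overlapping chunker; the chunk size and overlap values are assumptions, and real pipelines usually split on sentence or paragraph boundaries instead of fixed offsets:&lt;/p&gt;

```typescript
// Sketch of the "split into chunks" step before embedding:
// fixed-size chunks with overlap so context is not cut mid-thought.
function chunkText(text: string, size: number, overlap: number): string[] {
  if (overlap >= size) {
    throw new Error("overlap must be smaller than chunk size");
  }
  const chunks: string[] = [];
  const step = size - overlap;
  let start = 0;
  while (true) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) {
      break; // the last chunk reached the end of the text
    }
    start += step;
  }
  return chunks;
}

console.log(chunkText("abcdefghij", 4, 2));
// ["abcd", "cdef", "efgh", "ghij"]
```

&lt;p&gt;Each chunk then gets its own embedding in step 3, and the overlap is what keeps a sentence that straddles a boundary retrievable from either side.&lt;/p&gt;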

&lt;h2&gt;
  
  
  Human-in-the-loop is the feature that matters
&lt;/h2&gt;

&lt;p&gt;AI workflows become risky when every model output turns directly into an action. n8n's human-in-the-loop pattern is useful because it lets the workflow pause before sensitive steps.&lt;/p&gt;

&lt;p&gt;Good candidates for approval gates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI-written emails to customers&lt;/li&gt;
&lt;li&gt;CRM or database updates&lt;/li&gt;
&lt;li&gt;Financially relevant decisions&lt;/li&gt;
&lt;li&gt;Ticket closures or priority changes&lt;/li&gt;
&lt;li&gt;Anything that could create legal, security or support fallout&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best AI automation is not fully autonomous everywhere. It is selective: automate the routine steps, and pause at the points where accountability matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Role-specific ideas
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Developers
&lt;/h3&gt;

&lt;p&gt;n8n is API-first and workflows can be exported as JSON. That makes versioning, review and deployment possible. Custom nodes can be written in TypeScript, and LangChain integrations make more complex agent workflows possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project managers
&lt;/h3&gt;

&lt;p&gt;Many useful workflows do not require code: status reports, meeting summaries, sprint review preparation and reminders. The visual editor is approachable enough to prototype without waiting for engineering capacity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Product owners
&lt;/h3&gt;

&lt;p&gt;Product teams can collect feedback from support tickets, app reviews and community channels, classify it with AI and turn it into a weekly product insight report.&lt;/p&gt;

&lt;h3&gt;
  
  
  QA
&lt;/h3&gt;

&lt;p&gt;QA teams can summarize CI results, detect regressions, create tickets automatically and notify the right channel when a test signal needs attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  n8n vs. Zapier vs. Make
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Criterion&lt;/th&gt;
&lt;th&gt;n8n&lt;/th&gt;
&lt;th&gt;Zapier&lt;/th&gt;
&lt;th&gt;Make&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Self-hosting&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI depth&lt;/td&gt;
&lt;td&gt;Strong, including LangChain and agent workflows&lt;/td&gt;
&lt;td&gt;Useful but less deep&lt;/td&gt;
&lt;td&gt;Good ChatGPT-style integrations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pricing model&lt;/td&gt;
&lt;td&gt;Per workflow execution&lt;/td&gt;
&lt;td&gt;Per task&lt;/td&gt;
&lt;td&gt;Per operation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open source&lt;/td&gt;
&lt;td&gt;Yes, fair-code model&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning curve&lt;/td&gt;
&lt;td&gt;Medium to high&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low to medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Human-in-the-loop&lt;/td&gt;
&lt;td&gt;Native pattern&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When n8n is a good fit
&lt;/h2&gt;

&lt;p&gt;n8n is strongest when a team needs flexible automation, controlled data flows and enough technical depth to go beyond simple "when this happens, post that" workflows.&lt;/p&gt;

&lt;p&gt;It is probably not the fastest tool for a single lightweight SaaS automation. Zapier or Make may still be quicker for that. But once AI enters the workflow, especially with private data or approval gates, n8n becomes much more compelling.&lt;/p&gt;

&lt;p&gt;The best way to evaluate it is small: pick one recurring workflow, build it end-to-end, add one AI step and one approval step, then see whether the process actually gets better.&lt;/p&gt;




&lt;p&gt;This article is based on the German original on KIberblick:&lt;br&gt;
&lt;a href="https://kiberblick.de/artikel/workflow/n8n-ki-workflow-automation/" rel="noopener noreferrer"&gt;https://kiberblick.de/artikel/workflow/n8n-ki-workflow-automation/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>n8n</category>
      <category>ai</category>
      <category>automation</category>
      <category>workflow</category>
    </item>
  </channel>
</rss>
