Thomas Collardeau

Building BiasDetector: My Journey into AI Text Analysis with n8n and LLMs

The way text frames information can profoundly shape our understanding of events and ideas. As AI and Large Language Models (LLMs) become more prevalent, exploring how these tools can help us identify and understand linguistic nuances presents a fascinating challenge. This curiosity, combined with a passion for automation, led me to create BiasDetector — a project focused on using AI to analyze text for potential bias.

This post shares the journey, technical approach, and lessons learned while building this tool using n8n and LLMs.

Tech meets text: AI analyzes bias in the written word.

The Core Challenge: Focusing AI on Language, Not Just Entities

My interest in this area stemmed from exploring how LLMs could move beyond simple text generation or summarization to tackle more subtle analytical tasks. Early on, a key challenge emerged: LLMs possess vast knowledge from their training data. When analyzing text mentioning specific politicians or organizations, their pre-existing associations with these entities can inadvertently influence or distract from assessing linguistic bias.

This challenge led to BiasDetector’s central architectural decision: a multi-stage LLM chain designed to isolate and analyze linguistic patterns. The core concept involves a sequential process:

  1. Redactor LLM: “Neutralizes” the text by identifying specific entities — people, organizations, locations — and replacing them with generic placeholders. For example, “John Smith visited London” becomes “[PERSON_1] visited [CITY_1]”. This helps focus subsequent analysis while standardizing varied entity references for more consistent outcomes.

  2. Bias Analyzer LLM: Scrutinizes the anonymized text. Freed from specific entity knowledge, it can better focus on the language itself — phrasing, sentiment, and structure — to identify and analyze bias patterns.

  3. Descrambler LLM: Takes the placeholder-based analysis and original entity mappings (the “legend”) to intelligently reconstruct a fully human-readable report.

This chained approach enables more targeted and reliable analysis of how text frames its subject matter, providing greater consistency in outputs.
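To make the chain concrete, here is a minimal Python sketch of the same three stages. Everything in it is illustrative: `call_llm` is a hypothetical stand-in for whichever model provider you use, the prompts are paraphrases rather than the actual production prompts, and in the real project each stage runs as an LLM node inside n8n rather than as Python code.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to your LLM provider and return the reply."""
    raise NotImplementedError("wire this up to the model of your choice")

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Stage 1 (Redactor): swap named entities for placeholders and return a legend."""
    reply = call_llm(
        "Replace every person, organization and location in the text below with a "
        "generic placeholder such as [PERSON_1] or [CITY_1]. Return JSON with the "
        "keys 'anonymized_text' and 'legend' (placeholder -> original term).\n\n" + text
    )
    data = json.loads(reply)
    return data["anonymized_text"], data["legend"]

def analyze(anonymized_text: str) -> str:
    """Stage 2 (Bias Analyzer): assess phrasing, sentiment and framing, placeholders only."""
    return call_llm(
        "Analyze the following text for linguistic bias in word choice, sentiment and "
        "framing. Refer to entities only by their placeholders.\n\n" + anonymized_text
    )

def descramble(analysis: str, legend: dict[str, str]) -> str:
    """Stage 3 (Descrambler): rewrite the analysis with the original entity names."""
    return call_llm(
        "Rewrite the analysis below, replacing each placeholder with its original term "
        "from the legend so the report reads naturally.\n\n"
        f"Legend: {json.dumps(legend)}\n\nAnalysis:\n{analysis}"
    )

def bias_report(text: str) -> str:
    anonymized, legend = redact(text)
    return descramble(analyze(anonymized), legend)
```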

Orchestrating with n8n: From Web Form to Emailed Report

n8n served as the engine for this entire operation, providing the perfect platform to connect sequential LLM calls and manage data flow.

The BiasDetector n8n workflow: chaining LLMs for nuanced text analysis.

User Input via JotForm

For the public demo, I created a simple JotForm (try it here: https://form.jotform.com/251364670893061). Users can paste text (up to 4,000 characters for the demo) and provide an email address. I focused on clear instructions and mandatory fields to ensure a smooth user experience.

The n8n Workflow in Action

The workflow follows a clean sequence:

  1. Form Submission: JotForm submission triggers the n8n workflow

  2. Redaction: Text goes to the Redactor LLM node, which returns anonymized text and a “legend” mapping placeholders back to original terms

  3. Analysis: Anonymized text passes to the Bias Analysis LLM node, which is prompted to identify bias patterns and output a structured, four-part analysis in JSON format, still using placeholders

  4. Descrambling: The Bias Analyzer output and legend go to the Descrambler LLM node, which weaves original entity names back into the analysis

  5. Email Delivery: n8n’s data transformation assembles all components into an HTML email sent via Gmail
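To make steps 2 through 4 a little more tangible, here is a tiny illustrative example of the data that flows between them. The legend shape, the analysis snippet, and the naive substitution loop are my own sketch, not the workflow's actual schema or logic:

```python
# Illustrative only: the legend and analysis text below are invented examples.
legend = {
    "[PERSON_1]": "John Smith",
    "[CITY_1]": "London",
}

analysis_with_placeholders = (
    "The text repeatedly pairs [PERSON_1] with failure-related verbs, while "
    "[CITY_1] is framed as a passive victim of those decisions."
)

# A naive descrambler would be plain string substitution:
readable = analysis_with_placeholders
for placeholder, original in legend.items():
    readable = readable.replace(placeholder, original)
print(readable)
```

Plain substitution works for a toy case like this; the actual workflow hands the job to a Descrambler LLM so the restored names read naturally in every grammatical context, which blind find-and-replace cannot guarantee.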

n8n’s visual interface, robust LLM integration nodes, and data handling features made managing this sequence and iterating on the logic incredibly efficient.

What BiasDetector Delivers

Users receive a detailed email report offering a multi-faceted look at their submitted text:

  • Original text

  • Clear explanation of the anonymization step (showing anonymized text snippet and sample legend)

  • Comprehensive four-part bias analysis with original terms restored, including:

      • Bias Analysis Summary

      • Potential Impact of Bias

      • Alternative Framing Strategies

This report helps users understand not just how language can convey bias, but how to discern the likely intended impact or persuasive goal of a piece of text. For example, it might highlight not just overtly critical language toward a person, but reveal patterns like consistently associating an organization with negative outcomes — even subtly — that collectively work to frame that organization unfavorably.

By analyzing these linguistic patterns in isolation from our existing knowledge about the entities involved, BiasDetector can help users recognize when an article might be attempting to shift public opinion, build support for a particular viewpoint, or prime readers to view certain actors favorably or unfavorably. This makes it easier to identify the underlying persuasive intent that might not be immediately obvious when reading the text normally.

Lessons from the Redactor: Iterating Towards Intelligent Anonymization

A significant part of this journey involved iteratively refining the Redactor LLM’s prompt — the instructions guiding its anonymization task. This wasn’t simple find-and-replace; it was an exercise in guiding an LLM to understand nuanced requirements.

Early on, I grappled with optimal redaction levels. Redacting too little leaves the Bias Analyzer potentially influenced by prior entity knowledge, while redacting too aggressively strips away crucial contextual clues or linguistic patterns that indicate bias. Removing descriptive common nouns associated with entities, for instance, might make text too abstract for meaningful analysis.

The real challenge lay in ensuring redaction was thorough and semantically intelligent. If text mentioned “Rome” and later referred to it as “the Eternal City,” how could the system recognize these as the same entity? The goal was for the Bias Analyzer to understand that bias expressed toward “the Eternal City” was actually bias related to “Rome.”

This required the Redactor to do more than assign [CITY_1] to “Rome” — it needed to link related terms.

The Power of LLMs for Semantic Understanding

Through cycles of prompt testing and adjustment, I experimented with different ways to instruct the Redactor to identify not just primary entities but also their aliases, acronyms, or descriptive references. Success came through careful prompt engineering that guided the LLM to generate linked placeholders. The refined Redactor could identify “Rome” as [CITY_1] and recognize “the Eternal City” as [CITY_1_NICKNAME], with explicit linking in the placeholder nomenclature itself.
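As a small illustrative sketch of why encoding the link in the placeholder name pays off (the legend contents and grouping code below are my own example, not part of the workflow): because the relationship lives in the placeholder itself, any downstream step can cluster aliases under their primary entity with a simple prefix match.

```python
import re
from collections import defaultdict

# Invented example legend using the linking convention described above.
legend = {
    "[CITY_1]": "Rome",
    "[CITY_1_NICKNAME]": "the Eternal City",
    "[ORG_1]": "Acme Corp",
    "[ORG_1_ACRONYM]": "AC",
}

# Group every alias under its primary entity by matching the shared prefix.
groups = defaultdict(list)
for placeholder, original in legend.items():
    base = re.match(r"\[([A-Z]+_\d+)", placeholder).group(1)
    groups[base].append(original)

print(dict(groups))
# {'CITY_1': ['Rome', 'the Eternal City'], 'ORG_1': ['Acme Corp', 'AC']}
```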

This is precisely where LLMs shine for such tasks. Their inherent understanding of semantic meaning and context allows them to identify relationships — like linking a city to its well-known nickname — that would be extraordinarily complex and brittle to achieve with traditional rule-based programming, and impossibly laborious to do manually at scale. The LLM’s ability to grasp these nuances was fundamental to developing a Redactor that could intelligently prepare text for analysis.

This iterative dialogue with the Redactor LLM — defining problems with increasing precision, crafting nuanced instructions, observing outputs across diverse texts, and continually refining prompts — was where much of the project’s learning occurred. It highlighted that effective AI application often lies in this detailed, persistent conversation with the model.

The Road Ahead

While BiasDetector’s V1 demo is now functional and represents a significant milestone, the potential for further development is substantial. Future possibilities include:

  • Refining analytical depth

  • Exploring applications to different content types (government press releases, children’s literature, academic papers)

  • Expanding into related areas like logical fallacy detection

The core architecture built with n8n provides a solid foundation for these future explorations, demonstrating how modern no-code platforms can orchestrate sophisticated AI workflows to tackle complex analytical challenges.

Conclusion

BiasDetector has been an incredibly rewarding project. It pushed me to combine thoughtful prompt engineering, workflow automation with n8n, and a critical approach to AI-driven text analysis. Building it involved not just connecting APIs, but truly designing an intelligent process to navigate the complexities of LLM behavior and natural language. It’s a clear demonstration of how these modern tools can be orchestrated to create applications that offer genuine insight. The journey from initial concept to a functioning, user-facing demo has been a significant and fulfilling experience in applied AI.
