<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Stephanie ozor</title>
    <description>The latest articles on Forem by Stephanie ozor (@stephanie_ozor_1e7b693226).</description>
    <link>https://forem.com/stephanie_ozor_1e7b693226</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3406028%2F13d46c08-d1db-4bc5-8782-7dafb96f750b.jpg</url>
      <title>Forem: Stephanie ozor</title>
      <link>https://forem.com/stephanie_ozor_1e7b693226</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/stephanie_ozor_1e7b693226"/>
    <language>en</language>
    <item>
      <title>Are We Reaching The Limit Of AI Reliability?</title>
      <dc:creator>Stephanie ozor</dc:creator>
      <pubDate>Sat, 30 Aug 2025 10:54:26 +0000</pubDate>
      <link>https://forem.com/stephanie_ozor_1e7b693226/are-we-reaching-the-limit-of-ai-reliability-58g7</link>
      <guid>https://forem.com/stephanie_ozor_1e7b693226/are-we-reaching-the-limit-of-ai-reliability-58g7</guid>
      <description>&lt;p&gt;Over the past few years, Large Language Models (LLMs) like GPT-4 and Claude have blown us away with their ability to generate text, code, and reasoning that sounds almost human.&lt;br&gt;
But here’s a growing concern I’ve been reflecting on: why do these models consistently produce answers that are “almost right” yet fall short precisely when accuracy matters most? From writing scientific explanations to generating critical software code, there’s a strange ceiling on their precision.&lt;br&gt;
I’ve been exploring a provocative theory called &lt;strong&gt;Holographic Data Degradation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It borrows from how holograms work in physics: information is stored across the entire structure, not in isolated spots. In neural networks, this means data is distributed across layers in a wave-like, non-local manner. So if part of the model becomes slightly distorted, the entire output can subtly unravel, no matter how much we scale it.&lt;br&gt;
This could explain:&lt;br&gt;
🔹 Why LLMs fail at consistent reasoning&lt;br&gt;
🔹 Why fine-tuning doesn't fix everything&lt;br&gt;
🔹 Why bigger models aren’t always better&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What if the real limitation isn’t data or size, but the architecture of representation itself?&lt;/strong&gt;&lt;br&gt;
Imagine rethinking model design: modular memory, non-holographic encoding, or architectures inspired by capsule networks or sparse graphs. It could be the leap we need to move beyond “almost right” and into truly reliable AI.&lt;/p&gt;

&lt;p&gt;This idea is still evolving, but I believe it opens up a new path in AI theory and development, especially for high-stakes sectors like legal tech, medicine, and safety-critical software.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Have you encountered this “subtle degradation” in your work with LLMs?&lt;/strong&gt; Let’s discuss.&lt;/p&gt;

&lt;p&gt;#AI #MachineLearning #LLM #ArtificialIntelligence #DeepLearning #Neuroscience #EmergingTech #AIResearch #TechInnovation&lt;/p&gt;

</description>
      <category>llm</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>How to Use NLP to Extract Business Insights from Chat Data</title>
      <dc:creator>Stephanie ozor</dc:creator>
      <pubDate>Sun, 03 Aug 2025 00:11:48 +0000</pubDate>
      <link>https://forem.com/stephanie_ozor_1e7b693226/how-to-use-nlp-to-extract-business-insights-from-chat-data-197a</link>
      <guid>https://forem.com/stephanie_ozor_1e7b693226/how-to-use-nlp-to-extract-business-insights-from-chat-data-197a</guid>
      <description>&lt;p&gt;The digital age has gifted us a new frontier of data: unstructured text. From customer support chats to online survey responses, this data holds a wealth of information. The challenge, however, is turning this raw text into actionable business insights. This is where Natural Language Processing (NLP) comes in.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore how to leverage NLP to extract meaningful information from unstructured data, using a real-world application as our guide: an AI project that recognizes mental health behaviours from conversations between a human and a chatbot. This approach can be applied to a variety of business use cases, from improving customer support to detecting early signs of mental distress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Toolkit: A Primer on the Technologies&lt;/strong&gt;&lt;br&gt;
To tackle this problem, we rely on a stack of powerful Python libraries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;NLTK &amp;amp; spaCy: These are fundamental libraries for text pre-processing. They help us clean and tokenize the data, removing irrelevant words (stop words) and standardizing the text for analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scikit-learn: A machine learning powerhouse. We use it to build and train our models. Its functionality for feature extraction (like creating a Bag-of-Words or TF-IDF representation of the text) is crucial for converting text into a numerical format that our model can understand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support Vector Machines (SVM): A supervised machine learning algorithm that is particularly effective for classification tasks. In our case, it can be used to classify conversations based on the presence of certain behaviours or sentiments.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Real-World Application: Mental Health Behaviour Recognition&lt;/strong&gt;&lt;br&gt;
The project, which you can find on my GitHub repository, focuses on analysing conversations to identify mental health behaviours. While this is a sensitive and specialized application, the core NLP methodology is universally applicable to any business seeking to understand its customers better.&lt;/p&gt;

&lt;p&gt;The process typically involves these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Data Collection and Pre-processing: The first step is to gather the conversational data. This could be chat logs from a customer service platform or anonymized survey responses. Using NLTK or spaCy, we clean this data by removing punctuation, converting text to lowercase, and lemmatizing words to their root form.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Feature Extraction: Text data cannot be fed directly into a machine learning model. We must convert it into numerical features. A common method is TF-IDF (Term Frequency-Inverse Document Frequency), which weighs words based on their importance in a document and across the entire dataset. This allows the model to focus on words that are most relevant for classification.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;3. Model Training: With our numerical features, we can train a supervised learning model like an SVM. For this, we need a labelled dataset where conversations are pre-classified. In a business context, this could mean tagging support chats as “positive,” “negative,” or “needs follow-up.” In the mental health project, the data is tagged with specific behavioural indicators.&lt;/p&gt;

&lt;p&gt;4. Prediction and Analysis: Once the model is trained, it can be used to analyse new, unseen data. The model can then predict the class of a new conversation, allowing businesses to automatically triage support tickets, identify product issues, or, as in our case, recognize patterns of mental health behaviours.&lt;/p&gt;
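&lt;p&gt;Steps 3 and 4 can be sketched end to end with a scikit-learn pipeline that chains TF-IDF extraction and a linear SVM. The labelled chats below are invented for illustration; a real project would train on a much larger annotated dataset.&lt;/p&gt;

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Tiny invented training set: each chat is pre-labelled, as described in step 3.
train_texts = [
    "I feel hopeless and can't get out of bed",
    "I'm anxious about everything lately",
    "The app crashes when I upload a photo",
    "How do I reset my password?",
    "Great service, the agent solved my issue quickly",
    "Thanks, everything works perfectly now",
]
train_labels = [
    "needs follow-up", "needs follow-up",
    "negative", "negative",
    "positive", "positive",
]

# TF-IDF feature extraction plus an SVM classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

# Step 4: predict the class of new, unseen conversations.
new_chats = ["I am so worried and can't sleep", "the checkout page crashes"]
predictions = model.predict(new_chats)
print(predictions)
```

&lt;p&gt;Because the vectorizer and classifier live in one pipeline, the same cleaning and weighting applied at training time is applied automatically to every new chat at prediction time.&lt;/p&gt;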

&lt;p&gt;&lt;strong&gt;The Business Impact&lt;/strong&gt;&lt;br&gt;
The insights gained from this process can be transformative for a business:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Improving Customer Support: By analysing support chats, a company can automatically identify high-priority issues, route complex problems to specialized agents, or even provide real-time suggestions to agents based on the customer’s sentiment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Product Development: Analysing feedback from surveys and app reviews can reveal common pain points or feature requests, helping to guide the product roadmap.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Early Detection and Intervention: In a healthcare setting, this technology could be used to flag at-risk individuals based on their conversational patterns, enabling timely intervention and better care.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The techniques used in the mental health chatbot project demonstrate the power of NLP to turn messy, unstructured text into a valuable asset. The principles of pre-processing, feature engineering, and classification are a roadmap for any organization looking to extract actionable insights from their conversational data.&lt;/p&gt;

&lt;p&gt;You can explore the full project and its code at: &lt;a href="https://github.com/Stepha-code/AI-based-mental-health-behaviour-recognition-from-conversations-between-human-and-chatbot" rel="noopener noreferrer"&gt;https://github.com/Stepha-code/AI-based-mental-health-behaviour-recognition-from-conversations-between-human-and-chatbot&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>nlp</category>
      <category>python</category>
    </item>
  </channel>
</rss>
