<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Julian</title>
    <description>The latest articles on Forem by Julian (@julsr_mx).</description>
    <link>https://forem.com/julsr_mx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3249493%2F072c72c2-0655-4cb3-8c28-a67347b74eb2.png</url>
      <title>Forem: Julian</title>
      <link>https://forem.com/julsr_mx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/julsr_mx"/>
    <language>en</language>
    <item>
      <title>Why I Love These Kinds of Challenges 🔥</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Sat, 28 Feb 2026 03:24:13 +0000</pubDate>
      <link>https://forem.com/julsr_mx/why-i-love-these-kinds-of-challenges-15n9</link>
      <guid>https://forem.com/julsr_mx/why-i-love-these-kinds-of-challenges-15n9</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh-built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;First of all, I absolutely love events like this! More than the economic rewards, they push you to explore new platforms and tools. Once you grasp what each one can really do, the potential you unlock is simply incredible! Huge thanks to DevTo for creating this space. 🙌&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;This particular challenge felt super interesting because it encouraged building your portfolio in a more intuitive way. In my opinion, classic portfolios are becoming less common when job hunting these days — but they remain an amazing tool for self-reflection: realizing everything you’ve learned, achieved, and how far you’ve come.&lt;br&gt;
That’s why I decided to theme mine around trading 📈. I’ve been trading for several years now, and lately I’ve had much more consistent success. We traders spend countless hours in terminals watching charts, analyzing news, tracking prices… So why not turn the place where we live every day into a showcase of my professional experience?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6hcxgpaz37o988khuml.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6hcxgpaz37o988khuml.jpg" alt="What should I do?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the support of Gemini, solid documentation, and several reference examples, I was finally able to bring my project to life. Here are the key sections I created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A live ticker that runs constantly, displaying year-over-year improvement percentages for each technology I use (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;A line/pie chart combo showing my professional growth from the very beginning, the industries I’ve worked in, and the experience I’ve accumulated.&lt;/li&gt;
&lt;li&gt;A trade history section designed like a real position log — you can see how long I held each trade, the role I played, and the simulated successful impact of every operation.&lt;/li&gt;
&lt;li&gt;Technologies classified by years of experience, visualized as volume bars for quick understanding.&lt;/li&gt;
&lt;li&gt;My favorite part: a gallery powered by Power BI Embedded — visitors can explore real public reports I’ve built without ever leaving the page, giving a clear idea of the visual storytelling I can deliver.&lt;/li&gt;
&lt;li&gt;Finally, a studies/highlights section styled like quick news cards to review my education and key milestones.&lt;/li&gt;
&lt;/ul&gt;
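
&lt;p&gt;For anyone curious about the math behind that ticker, here is a minimal sketch of the year-over-year calculation (plain Python for illustration; the names and sample numbers are hypothetical, not the portfolio's actual code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Hypothetical sketch: year-over-year improvement per technology.
# skill_hours maps each technology to (last_year, this_year) usage scores.
skill_hours = {"Python": (320, 410), "SQL": (500, 540), "Power BI": (200, 290)}

for tech, (prev, curr) in skill_hours.items():
    yoy = (curr - prev) / prev * 100  # percentage change, like a price ticker
    sign = "+" if yoy &gt;= 0 else ""
    print(f"{tech} {sign}{yoy:.1f}%")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
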
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://cv-juls-v3-762834586558.us-central1.run.app/"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt;LINK: &lt;a href="https://cv-juls-v3-762834586558.us-central1.run.app" rel="noopener noreferrer"&gt;Interactive Trading CV&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Gemini Feedback 🌐
&lt;/h2&gt;

&lt;p&gt;One thing I didn’t know before was how straightforward it is to use Google for free/static hosting of your site — it was awesome! At first, I struggled to understand the instructions and exactly what was required. But after reading the docs and leaning on Gemini again, everything became surprisingly simple. You connect it directly to your repo and it just works.&lt;br&gt;
Honestly, this hosting part was the most challenging for me. I was also worried about potential future charges, but setting up Google billing alerts for when costs exceed a certain threshold, plus sticking to the simplest hosting tier, kept everything under control and protected me from surprise bills — something many people don’t consider when jumping into these challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Step: Your Feedback on My CV! 📄
&lt;/h2&gt;

&lt;p&gt;I’d love to give my CV a second pass and hear your thoughts. Do you find it easy to read and understand? Any tips or suggestions would be super valuable!&lt;br&gt;
Thanks for continuing to create spaces where we learn and grow every single day! Big hug to everyone — let’s keep building this amazing community together. 🤗&lt;br&gt;
What do you think of the trading-themed portfolio approach? Would love to read your comments or experiences below! 🚀&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
      <category>portfoliotips</category>
    </item>
    <item>
      <title>CLI Tool that analyzes git repos and generates beautiful documentation!</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Mon, 16 Feb 2026 02:57:16 +0000</pubDate>
      <link>https://forem.com/julsr_mx/cli-tool-that-analyzes-git-repos-and-generates-beautiful-documentation-4e47</link>
      <guid>https://forem.com/julsr_mx/cli-tool-that-analyzes-git-repos-and-generates-beautiful-documentation-4e47</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;h2&gt;
  
  
  💡 Project Background
&lt;/h2&gt;

&lt;p&gt;We can't deny that using "vibe coding" and AI tools makes our process of creating new programs and software much more efficient. However, one major issue I've encountered is that we can sometimes lose visibility into what's actually being built—or even forget what we did in previous commits.&lt;br&gt;
That's why I decided to build a CLI tool powered by GitHub Copilot to solve this problem!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DocWeave&lt;/strong&gt; is a &lt;strong&gt;command-line tool&lt;/strong&gt; that analyzes your git repository and generates organized documentation using GitHub Copilot CLI. It transforms your commit history into markdown docs, Mermaid diagrams, and AI-powered insights.&lt;br&gt;
DocWeave extends AI to post-coding tasks. It turns rapid development into professional, well-documented outputs, making developers and teams more efficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem Solved
&lt;/h3&gt;

&lt;p&gt;Developers often face "documentation debt" where code evolves faster than docs, leading to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Onboarding delays for new team members&lt;/li&gt;
&lt;li&gt;Bugs from misunderstandings&lt;/li&gt;
&lt;li&gt;Reduced productivity (20-50% of dev hours on docs per Gartner reports)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DocWeave automates documentation creation, saving time and improving project maintainability.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✨ Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🤖 &lt;strong&gt;AI-Powered Analysis&lt;/strong&gt;: Uses GitHub Copilot CLI to understand code changes and provide context&lt;/li&gt;
&lt;li&gt;📊 &lt;strong&gt;Visual Diagrams&lt;/strong&gt;: Generates Mermaid diagrams (timelines, file relationships, importance charts)&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;Auto-Documentation&lt;/strong&gt;: Creates organized markdown files in &lt;code&gt;DocweaveDocs/&lt;/code&gt; folder&lt;/li&gt;
&lt;li&gt;🎯 &lt;strong&gt;Next Steps&lt;/strong&gt;: Suggests actionable next steps based on code analysis&lt;/li&gt;
&lt;li&gt;⚡ &lt;strong&gt;Simple CLI&lt;/strong&gt;: Just run &lt;code&gt;docweave analyze&lt;/code&gt; in any git repository&lt;/li&gt;
&lt;li&gt;🔄 &lt;strong&gt;Graceful Fallback&lt;/strong&gt;: Works even without Copilot CLI using intelligent heuristics (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
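
&lt;p&gt;To make the fallback idea concrete, here is a minimal sketch of how such a check could work (assumed logic for illustration, not the actual code in &lt;code&gt;lib/copilot_check.py&lt;/code&gt;; the &lt;code&gt;copilot&lt;/code&gt; command name is an assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import shutil

def copilot_available() -&gt; bool:
    # Assumption: the Copilot CLI is discoverable on PATH as "copilot".
    return shutil.which("copilot") is not None

def heuristic_summary(message: str) -&gt; str:
    # Fallback path: the commit subject line doubles as a summary.
    return message.splitlines()[0][:80]

mode = "copilot" if copilot_available() else "heuristics"
print(f"Analysis mode: {mode}")
print(heuristic_summary("fix: handle empty repos\n\nLonger body..."))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;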

&lt;h2&gt;
  
  
  📝 Example Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solo Developer&lt;/strong&gt;: After a coding session, analyze commits and generate documentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team Collaboration&lt;/strong&gt;: Generate changelogs and architecture diagrams for team reviews&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Review&lt;/strong&gt;: Auto-document PR changes with AI-powered context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding&lt;/strong&gt;: Create up-to-date documentation for new team members&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project Handoff&lt;/strong&gt;: Generate comprehensive documentation before transferring a project&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/e6Po_lqheyQ"&gt;
  &lt;/iframe&gt;


&lt;br&gt;
Repo: &lt;a href="https://github.com/Juls95/DocWeave/tree/main" rel="noopener noreferrer"&gt;https://github.com/Juls95/DocWeave/tree/main&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw41tj9744deg5yib9tb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw41tj9744deg5yib9tb.png" alt="Recent Changes"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmnp2i9h1ics58dqnyx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmnp2i9h1ics58dqnyx9.png" alt="Diagrams"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📖 Usage
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Basic Usage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Navigate to any git repository&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; /path/to/your/repo

&lt;span class="c"&gt;# Analyze last 5 commits (default)&lt;/span&gt;
docweave analyze

&lt;span class="c"&gt;# Analyze only the last commit (quick)&lt;/span&gt;
docweave analyze &lt;span class="nt"&gt;--last&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Options
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Analyze last commit only (quick analysis)&lt;/span&gt;
docweave analyze &lt;span class="nt"&gt;--last&lt;/span&gt;

&lt;span class="c"&gt;# Analyze last 5 commits (default)&lt;/span&gt;
docweave analyze

&lt;span class="c"&gt;# Analyze specific number of commits&lt;/span&gt;
docweave analyze &lt;span class="nt"&gt;--limit&lt;/span&gt; 20

&lt;span class="c"&gt;# Analyze specific repository&lt;/span&gt;
docweave analyze &lt;span class="nt"&gt;--path&lt;/span&gt; /path/to/repo
&lt;span class="c"&gt;# or short: docweave analyze -p /path/to/repo&lt;/span&gt;

&lt;span class="c"&gt;# Analyze commits from last 7 days only&lt;/span&gt;
docweave analyze &lt;span class="nt"&gt;--days&lt;/span&gt; 7
&lt;span class="c"&gt;# or short: docweave analyze -d 7&lt;/span&gt;

&lt;span class="c"&gt;# Combine options&lt;/span&gt;
docweave analyze &lt;span class="nt"&gt;--path&lt;/span&gt; ./my-repo &lt;span class="nt"&gt;--limit&lt;/span&gt; 15 &lt;span class="nt"&gt;--days&lt;/span&gt; 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  📁 Generated Documentation
&lt;/h2&gt;

&lt;p&gt;DocWeave creates a &lt;code&gt;DocweaveDocs/&lt;/code&gt; folder in your repository with the generated markdown docs, Mermaid diagrams, and AI-powered insights.&lt;/p&gt;

&lt;h2&gt;
  
  
  🏗️ Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/docweave/
├── cli.py              # CLI entrypoint and command handling
├── components/         # Reusable components
│   ├── copilot_integration.py  # GitHub Copilot CLI integration
│   └── doc_generator.py        # Documentation generation
├── features/           # Business logic
│   └── commit_analysis.py      # Git commit analysis
├── lib/                # Utilities
│   ├── copilot_check.py        # Copilot CLI availability check
│   ├── repo_utils.py           # Repository path utilities
│   └── utils.py               # General utilities
└── types/              # Type definitions
    └── models.py               # Data models
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqnfl88j5v3008ca16bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqnfl88j5v3008ca16bu.png" alt="Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;Before Copilot CLI, I had to constantly ask LLMs or IDEs to explain what they understood, request a specific format, provide templates for generating documentation, and ensure everything was correct. Now, with &lt;strong&gt;GitHub Copilot&lt;/strong&gt;, I can be confident that, as a &lt;em&gt;native GitHub development&lt;/em&gt; tool, it fully analyzes all the information available in the repo and &lt;strong&gt;generates content&lt;/strong&gt; strictly following the specified templates.&lt;/p&gt;

&lt;p&gt;GitHub Copilot CLI provides AI-powered analysis that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understands code context and intent&lt;/li&gt;
&lt;li&gt;Provides deeper insights into why changes were made&lt;/li&gt;
&lt;li&gt;Suggests more relevant next steps&lt;/li&gt;
&lt;li&gt;Generates better summaries&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Trading CV - A different and interactive way to share your skills</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Fri, 30 Jan 2026 17:04:19 +0000</pubDate>
      <link>https://forem.com/julsr_mx/trading-cv-a-different-and-interactive-way-to-share-your-skills-476o</link>
      <guid>https://forem.com/julsr_mx/trading-cv-a-different-and-interactive-way-to-share-your-skills-476o</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;Hi &lt;strong&gt;DEV&lt;/strong&gt;elopers!&lt;br&gt;
I'm Julian, a Data Analyst and Software Engineer with 9 years of experience. Lately, I've been learning a lot about applying my skills in extracting insights from data to build models and trade more accurately! So I did my best to build my CV as a trading system, where you can explore all the charts, embedded Power BI reports, and my experience in one single trading console. Enjoy!&lt;/p&gt;
&lt;h2&gt;
  
  
  Portfolio
&lt;/h2&gt;



&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://cv-juls-v3-762834586558.us-central1.run.app/"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt;LINK: &lt;a href="https://cv-juls-v3-762834586558.us-central1.run.app" rel="noopener noreferrer"&gt;Interactive Trading CV&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;h2&gt;
  
  
  🚀 Project Overview
&lt;/h2&gt;

&lt;p&gt;This portfolio leverages the visual language of stock market dashboards (Bloomberg Terminal, TradingView) to present professional experience. Instead of a static PDF, users interact with "assets" (skills), view "market performance" (career growth), and read "analyst reports" (case studies).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Design Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Google Stack:&lt;/strong&gt; Antigravity and AI Studio&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme:&lt;/strong&gt; Dark Mode, Cyberpunk/Financial aesthetics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interaction:&lt;/strong&gt; Real-time feedback, pulsing indicators, and glassmorphism.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data-Driven:&lt;/strong&gt; Every UI element represents a datapoint from over 9 years of experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠️ Technology Stack
&lt;/h2&gt;

&lt;p&gt;This application is built with a modern, high-performance stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Framework:&lt;/strong&gt; &lt;a href="https://react.dev/" rel="noopener noreferrer"&gt;React&lt;/a&gt; with &lt;a href="https://www.typescriptlang.org/" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt; (Vite)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Styling:&lt;/strong&gt; &lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;Tailwind CSS&lt;/a&gt; for utility-first styling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Animations:&lt;/strong&gt; &lt;a href="https://www.framer.com/motion/" rel="noopener noreferrer"&gt;Framer Motion&lt;/a&gt; for fluid transitions and modal effects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Visualization:&lt;/strong&gt; &lt;a href="https://recharts.org/" rel="noopener noreferrer"&gt;Recharts&lt;/a&gt; for the main career performance chart.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Icons:&lt;/strong&gt; &lt;a href="https://lucide.dev/" rel="noopener noreferrer"&gt;Lucide React&lt;/a&gt; for consistent, lightweight iconography.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📦 Features &amp;amp; Component Architecture
&lt;/h2&gt;

&lt;p&gt;The application is structured into a single-page dashboard containing several specialized modules:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Market Performance (Main Chart)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visuals:&lt;/strong&gt; Switchable Area Chart and Candlestick Chart.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data:&lt;/strong&gt; Visualizes career timeline vs. "Market Value" (Role seniority/Impact).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interactive:&lt;/strong&gt; Hover tooltips showing stack usage per role and company logos (Clearbit API).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markers:&lt;/strong&gt; "Dividend" markers representing major industries entered.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Skills Market Depth
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visuals:&lt;/strong&gt; Order book style list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logic:&lt;/strong&gt; Skills are ranked by "Volume" (Years of Experience) and "Depth" (Competency Level); see the sketch after this list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assets:&lt;/strong&gt; SQL, Python, PowerBI, RPA, etc.&lt;/li&gt;
&lt;/ul&gt;
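
&lt;p&gt;The ranking logic itself is simple. Here is a minimal sketch of the idea (illustrated in Python with made-up numbers; the app itself is React/TypeScript):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Hypothetical data: (skill, years of experience, competency from 0 to 1).
skills = [("SQL", 9, 0.95), ("Python", 7, 0.85), ("PowerBI", 6, 0.9), ("RPA", 4, 0.7)]

# Rank by "Volume" (years) first, then "Depth" (competency), like an order book.
for name, years, depth in sorted(skills, key=lambda s: (s[1], s[2]), reverse=True):
    bar = "█" * years  # one block per year, rendered as a volume bar
    print(f"{name:&lt;8} {bar} depth={depth:.0%}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;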

&lt;h3&gt;
  
  
  3. Experience Feed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visuals:&lt;/strong&gt; Scrolling news ticker / social feed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content:&lt;/strong&gt; Chronological history of roles, companies, and high-impact bullets (e.g., "+20% Efficiency").&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Analyst Reports (Gallery)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visuals:&lt;/strong&gt; Card grid with hover effects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Function:&lt;/strong&gt; Opens "Reports" (Deep dives/Case studies) in an embedded modal view. Links to external PowerBI/Fabric dashboards.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Fundamentals (Bio &amp;amp; Stats)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Content:&lt;/strong&gt; Education, Certifications, Contact Links.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Footer:&lt;/strong&gt; Quick access to GitHub, LinkedIn, and Email.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'm Most Proud Of
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Trading features: You can feel like you're inside a trading interface, with my experience translated into trading info.&lt;/li&gt;
&lt;li&gt;Embedded reports from Power BI: Just click on the gallery images, and you can interact with the reports right there!&lt;/li&gt;
&lt;li&gt;Animations: Subtle and beautiful animations to catch your attention!&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>SleepSync - Full-Stack App made with Xano and Bubble</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Sun, 14 Dec 2025 20:43:46 +0000</pubDate>
      <link>https://forem.com/julsr_mx/sleepsync-fullstack-app-made-with-xano-and-bubble-cjh</link>
      <guid>https://forem.com/julsr_mx/sleepsync-fullstack-app-made-with-xano-and-bubble-cjh</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/xano-2025-11-20"&gt;Xano AI-Powered Backend Challenge&lt;/a&gt;: Full-Stack, AI-First Application&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;My app unifies sleep and health data from Oura and Whoop wearables into a "Truth Score™" (a unified metric, e.g., 0-100, calculated by averaging or weighting key metrics like sleep duration, readiness/recovery scores, HRV, resting heart rate, etc.). For this initial version, we'll simulate API integrations by allowing users to upload CSV files.&lt;/p&gt;
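
&lt;p&gt;As a rough illustration of the scoring idea, here is a minimal Python sketch (the field names come from the prompt below; the weights are hypothetical, not the app's exact formula):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def truth_score(oura: dict, whoop: dict) -&gt; float:
    """Unify Oura and Whoop metrics into one 0-100 score."""
    sleep_hours = (oura["total_sleep_seconds"] / 3600
                   + whoop["asleep_duration_min"] / 60) / 2
    readiness = (oura["readiness_score"] + whoop["recovery_score"]) / 2
    # Weighting is a design choice; here readiness dominates and sleep nudges it.
    return round(0.8 * readiness + 0.2 * min(sleep_hours / 8, 1) * 100, 1)

print(truth_score({"total_sleep_seconds": 27000, "readiness_score": 82},
                  {"asleep_duration_min": 430, "recovery_score": 76}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;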

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div&gt;
  &lt;iframe src="https://loom.com/embed/fff1f436d22e4599950450f243552203"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;br&gt;
&lt;a href="https://juramirezve-12078.bubbleapps.io/uploads" rel="noopener noreferrer"&gt;Bubble Full Stack App&lt;/a&gt;
&lt;h2&gt;
  
  
  The AI Prompt I Used
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generate a complete Xano backend in XanoScript for a sleep data unification app called SleepSync. Include: Authentication (email/password signup/login endpoints returning JWT). Database tables: users (id, email, password_hash, created_at), oura_data (user_id, date, total_sleep_seconds, readiness_score, resting_heart_rate, hrv, etc. – base on Oura CSV fields like deep_sleep_duration, respiratory_rate), whoop_sleeps (user_id, cycle_start_time, asleep_duration_min, recovery_score, resting_heart_rate_bpm, hrv_ms, etc.), unified_data (user_id, date, unified_sleep_hours, unified_rhr, unified_hrv, truth_score). API Endpoints: POST /auth/signup (create user, hash password, generate user_id), POST /auth/login (validate, return JWT), POST /upload/csv (authenticated, accept file/type 'oura' or 'whoop', parse CSV, insert with user_id from JWT, normalize timestamps to UTC), GET /data/unified (authenticated, query by user_id/date range, compute unified metrics like unified_sleep_hours = avg(Oura total_sleep/3600, Whoop asleep_min/60); truth_score = avg(Oura readiness, Whoop recovery); store in unified_data). Use XanoScript syntax ($db, $auth). Handle errors, nulls (default 0). Assume CSVs have headers. Make scalable for multiple uploads. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  How I Refined the AI-Generated Code
&lt;/h2&gt;

&lt;p&gt;I found sample scripts in the documentation. They let me be more specific in Cursor and stay aligned with the information required by Xano.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I opened the XanoScript in my configuration settings on my workspace. Should I copy paste actual code there? Also, I found below code in the initial script

workspace SleepSyncBackend {
  acceptance = {ai_terms: false}
  preferences = {
    internal_docs    : false
    track_performance: true
    sql_names        : false
    sql_columns      : true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Let's do this only for OURA data. How should I create each one of the steps you mentioned? Should I configure something in Bubble or Xano? Consider this official guide:
https://www.xano.com/learn/connect-xano-bubble/ 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  My Experience with Xano
&lt;/h2&gt;

&lt;p&gt;Xano is really good for making a backend usable and safe. The problem I found is that there's no Discord or social chat for interacting with the community, and nowadays that's frustrating: not all the help should be left to documentation or an AI bot.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>xanochallenge</category>
      <category>ai</category>
      <category>backend</category>
    </item>
    <item>
      <title>🌟 My Experience at ETH New York 2025</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Mon, 25 Aug 2025 23:19:09 +0000</pubDate>
      <link>https://forem.com/julsr_mx/my-experience-at-eth-new-york-2025-3j83</link>
      <guid>https://forem.com/julsr_mx/my-experience-at-eth-new-york-2025-3j83</guid>
      <description>&lt;p&gt;10 days ago, I had the opportunity to attend ETH New York, an event hosted by the ETH Global foundation where hackers from around the globe gather to test their technical skills and compete for sponsor prizes. Over an intense 36 hours, I developed an MVP (Minimum Viable Product) for my project, Founil, which implements innovative features to tackle real-world blockchain challenges. 🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 The Project: Founil and Uniswap v4 Hooks
&lt;/h2&gt;

&lt;p&gt;My project, hosted at &lt;a href="https://github.com/Juls95/founil" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, leverages Uniswap, one of the most prominent decentralized exchange (DEX) protocols built on Ethereum. The goal is to address the growing issue of scam tokens and fraudulent schemes in the blockchain space, affecting both lesser-known and socially prominent projects. 💻&lt;/p&gt;

&lt;p&gt;Founil introduces a groundbreaking funding platform that combines NFTs (Non-Fungible Tokens) and Liquidity Pools to create a donation and reward mechanism. NFTs serve as a record of contributions from donors, while Uniswap liquidity pools provide market liquidity, generate transaction revenue, and enable project creators to withdraw funds in a controlled manner, preventing potential scams. 🪙&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ How It Works: Liquidity Bootstrapped Pool
&lt;/h2&gt;

&lt;p&gt;Using Uniswap v4 hooks, I implemented a Liquidity Bootstrapped Pool. Users swap a token, such as ETH, for a collateral token defined by the project creator. This collateral funds the liquidity pool, which is released gradually to prevent bots from buying low and selling high—a common issue in today’s blockchain ecosystem.&lt;/p&gt;

&lt;p&gt;Once the liquidity pool is funded, the hooks capture transaction fees, initially distributed to the project creator. A portion of these fees is reinvested into the pool via Auto Compound, growing the pool over time. Additionally, the afterSwap hook automatically mints NFTs, recording who donated, when, and how much. These NFTs allow for donor verification and can be traded or held for future project benefits. 🎮&lt;/p&gt;
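
&lt;p&gt;The contracts themselves live on-chain, but the gradual-release idea is easy to sketch (a hypothetical linear schedule in Python, just to show the shape of the mechanism; Founil's actual hook parameters may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def releasable(total_collateral: float, elapsed_days: int, vesting_days: int = 90) -&gt; float:
    """Linear release: liquidity unlocks gradually so bots can't buy low and dump."""
    return total_collateral * min(elapsed_days / vesting_days, 1.0)

for day in (0, 30, 90):
    print(f"day {day}: {releasable(10_000, day):,.0f} unlocked")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;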

&lt;h2&gt;
  
  
  🔒 Transparency and Security
&lt;/h2&gt;

&lt;p&gt;A core feature of Founil is transparency. Project creators must provide evidence of how and where funds are used and define a clear mechanism for gradual fund withdrawals. This ensures donors are not defrauded and that projects deliver on their promises, fostering trust within the community. ✅&lt;/p&gt;

&lt;p&gt;What do you think of Founil? &lt;strong&gt;Any advice to win a Hackathon prize next time?&lt;/strong&gt; Share your thoughts or suggestions in the comments to help improve this project! 🚀&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>web3</category>
      <category>hackathon</category>
      <category>uniswap</category>
      <category>nfts</category>
    </item>
    <item>
      <title>Submission deadline</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Sun, 24 Aug 2025 19:12:41 +0000</pubDate>
      <link>https://forem.com/julsr_mx/submission-deadline-j1h</link>
      <guid>https://forem.com/julsr_mx/submission-deadline-j1h</guid>
      <description>&lt;p&gt;Hello!&lt;/p&gt;

&lt;p&gt;I've looked into the official challenge post (&lt;a href="https://dev.to/challenges/brightdata-n8n-2025-08-13"&gt;https://dev.to/challenges/brightdata-n8n-2025-08-13&lt;/a&gt;) and the submission date is August 31.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl95hcb2crkpesvzby9nx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl95hcb2crkpesvzby9nx.png" alt=" " width="777" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But this post &lt;a href="https://dev.to/devteam/new-demo-videos-from-the-n8n-and-bright-data-teams-for-the-real-time-ai-agents-challenge-3of3"&gt;https://dev.to/devteam/new-demo-videos-from-the-n8n-and-bright-data-teams-for-the-real-time-ai-agents-challenge-3of3&lt;/a&gt; says it's on September 7. Could anyone confirm the exact submission deadline, please?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbogl2lzobkhpzsfqratq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbogl2lzobkhpzsfqratq.png" alt=" " width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>n8nbrightdatachallenge</category>
    </item>
    <item>
      <title>Your personal Blockchain tutor powered by AssemblyAI</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Sun, 27 Jul 2025 02:58:53 +0000</pubDate>
      <link>https://forem.com/julsr_mx/your-personal-blockchain-tutor-powered-by-assemblyai-1dek</link>
      <guid>https://forem.com/julsr_mx/your-personal-blockchain-tutor-powered-by-assemblyai-1dek</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/assemblyai-2025-07-16"&gt;AssemblyAI Voice Agents Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Overview of the Voice Agent and Prompt Categories Addressed
&lt;/h3&gt;

&lt;p&gt;The voice-based Crypto Education Agent is designed to provide interactive, personalized learning experiences in cryptocurrency topics through natural voice interactions. It uses AssemblyAI for real-time speech-to-text (STT) transcription, handling domain-specific jargon like "BTC," "DeFi," and "zero-knowledge proofs" with high accuracy. The transcribed queries are processed via a Retrieval-Augmented Generation (RAG) pipeline built with LlamaIndex, which retrieves factual information from a curated knowledge base stored in Pinecone vector database. Responses are generated using Anthropic's Claude LLM, ensuring detailed, hallucination-free explanations. The agent also learns from conversations by indexing dialogue history, enabling personalized follow-ups, such as building on prior queries about staking to explain related DeFi concepts.&lt;/p&gt;

&lt;p&gt;Key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Low-latency interactions&lt;/strong&gt;: Achieving ~300ms transcription via AssemblyAI's Universal-Streaming, making it feel like a real-time conversation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain-specific enhancements&lt;/strong&gt;: Word boosts in AssemblyAI improve transcription accuracy for crypto terms to 95%+ (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Educational tools&lt;/strong&gt;: Supports use cases like concept explanations, market insights, and interactive quizzes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing and scalability&lt;/strong&gt;: Validated through 5 structured use cases with metrics for accuracy, relevance, and personalization; built for future extensions like live market APIs.&lt;/li&gt;
&lt;/ul&gt;
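
&lt;p&gt;The word boost mentioned above is just a list of domain terms handed to the transcription config. A minimal sketch (the terms are taken from this post; the repo's actual list may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import assemblyai as aai

aai.settings.api_key = "YOUR_ASSEMBLYAI_KEY"  # placeholder

# Domain terms the model should favor, drawn from this post's examples.
word_boost = ["BTC", "DeFi", "zero-knowledge proofs", "staking", "blockchain"]

config = aai.TranscriptionConfig(word_boost=word_boost)
transcriber = aai.Transcriber(config=config)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;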

&lt;p&gt;This agent primarily addresses the following prompt categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Domain Expert&lt;/strong&gt;: It acts as a specialized tutor in cryptocurrency education, drawing from a tailored knowledge base to explain complex topics (e.g., blockchain fundamentals or NFT terms) accurately and contextually, reducing hallucinations through RAG.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Performance&lt;/strong&gt;: Leveraging AssemblyAI's streaming STT for low-latency voice input, it enables seamless, interactive sessions where users can speak queries and receive immediate responses, with conversation learning for dynamic personalization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While it could indirectly support &lt;strong&gt;Business Automation&lt;/strong&gt; (e.g., automating educational workflows in trading advisory), the core focus is on expert knowledge delivery and performant real-time interactions.&lt;/p&gt;
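
&lt;p&gt;Before the demo, here is a condensed sketch of how the pieces line up end to end. It simplifies the snippets later in this post: it assumes API keys for AssemblyAI and the LLM/embedding backends are configured, and it uses LlamaIndex defaults instead of the Pinecone setup shown below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import assemblyai as aai
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# 1) Voice in: transcribe the spoken question with domain word boosts.
aai.settings.api_key = "YOUR_ASSEMBLYAI_KEY"  # placeholder
config = aai.TranscriptionConfig(word_boost=["BTC", "DeFi", "staking"])
question = aai.Transcriber(config=config).transcribe("question.wav").text

# 2) RAG: index the crypto knowledge base, retrieve, and answer via the LLM.
docs = SimpleDirectoryReader("crypto_kb").load_data()
chat_engine = VectorStoreIndex.from_documents(docs).as_chat_engine()
print(chat_engine.chat(question))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;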

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/YfIN-HNV6Nc"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Repository
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Juls95" rel="noopener noreferrer"&gt;
        Juls95
      &lt;/a&gt; / &lt;a href="https://github.com/Juls95/assembly_ai" rel="noopener noreferrer"&gt;
        assembly_ai
      &lt;/a&gt;
    &lt;/h2&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;AssemblyAI RAG with Learning System&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A comprehensive system that combines AssemblyAI's Universal-Streaming technology with Retrieval-Augmented Generation (RAG) using LlamaIndex, Anthropic, and Pinecone. This project enables real-time speech-to-text transcription, semantic search, and conversational AI with long-term learning capabilities.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🏗️ Project Structure&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;assembly_ai/
├── assembly.py                 # Main AssemblyAI streaming implementation
├── build_rag_index.py         # Pinecone index construction
├── rag_with_learning.py       # RAG chat engine with learning
├── transcribe_test_audio.py   # Batch audio transcription
├── crypto_kb/                 # Knowledge base documents
├── test_audio/                # Audio test samples
├── test_cases.json           # Test case definitions
└── .env                      # Environment variables (create this)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Speech-to-Text&lt;/strong&gt;: Using AssemblyAI's Universal-Streaming API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain-Specific Word Boost&lt;/strong&gt;: Enhanced recognition for crypto/finance terminology&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG Pipeline&lt;/strong&gt;: Semantic search with Pinecone and LlamaIndex&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term Learning&lt;/strong&gt;: Conversation history integration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Testing&lt;/strong&gt;: Audio transcription accuracy evaluation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Modal Input&lt;/strong&gt;: Support for both live audio and pre-recorded files&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;📺 Demo Video&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Watch a demonstration…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Juls95/assembly_ai" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Technical Implementation &amp;amp; AssemblyAI Integration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Using AssemblyAI's capabilities
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_batch_tests&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Use the API key from Assembly
&lt;/span&gt;    &lt;span class="n"&gt;aai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;
    &lt;span class="c1"&gt;# Use the word_boost list
&lt;/span&gt;    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;TranscriptionConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;word_boost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;word_boost&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# Defines config fot the Assembly usage.
&lt;/span&gt;    &lt;span class="n"&gt;transcriber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Transcriber&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test_audio/test_cases.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;test_cases&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;case&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;test_cases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;transcript&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;transcriber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transcribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;case&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;filename&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Expected: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;case&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;expected_transcript&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Got: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;transcript&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Iterates through a set of test cases, each with an audio file (.wav) and an expected transcript, simulating a real-time conversation.&lt;/li&gt;
&lt;li&gt;Transcribes each file and prints both the expected and actual results for comparison.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides a simple but effective way to evaluate transcription accuracy and system performance.&lt;/li&gt;
&lt;li&gt;Supports regression testing and quality assurance for the speech-to-text pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Testing AssemblyAI's real time processing
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;microphone_stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;extras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MicrophoneStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sample_rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;microphone_stream&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Audio chunk read:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;break&lt;/span&gt;  &lt;span class="c1"&gt;# Remove break to keep reading, this is just to test
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;microphone_stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This snippet tests live microphone audio capture by reading a chunk and printing its size.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;break&lt;/code&gt; is used to only read one chunk for debugging.&lt;/li&gt;
&lt;li&gt;After confirming audio capture, the stream is passed to AssemblyAI for real-time transcription.&lt;/li&gt;
&lt;li&gt;Not a priority for the MVP, but planned for future versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensures that the microphone is correctly configured and audio is being captured before starting a full streaming session.&lt;/li&gt;
&lt;li&gt;Helps debug device index and sample rate issues, which are common in cross-platform audio applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Testing AssemblyAI's accuracy and storing data for future processing
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;transcripts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;case&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;test_cases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;audio_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;case&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;filename&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;transcript&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;transcriber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transcribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;audio_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;transcripts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;transcript&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Transcript for &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;audio_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;transcript&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Save transcripts for RAG queries
&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;transcripts.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;transcripts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This code batch-transcribes a set of audio files, storing the resulting transcripts.&lt;/li&gt;
&lt;li&gt;Each transcript is printed and then saved to a JSON file for later use.&lt;/li&gt;
&lt;li&gt;Imagine each transcript as the result of a real-time conversation with a typical user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enables the integration of audio data into the RAG pipeline, allowing spoken content to be indexed and retrieved semantically.&lt;/li&gt;
&lt;li&gt;Supports workflows where knowledge is captured from meetings, podcasts, or other audio sources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating core vector index for RAG system
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;PineconeVectorStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pinecone_index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pinecone_index&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;storage_context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StorageContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_defaults&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;VectorStoreIndex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;storage_context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;storage_context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;embed_model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embed_model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This code builds the core vector index for the RAG system.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PineconeVectorStore&lt;/code&gt; connects LlamaIndex to a managed Pinecone vector database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;StorageContext&lt;/code&gt; manages how data is stored and retrieved.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VectorStoreIndex.from_documents&lt;/code&gt; ingests all documents using the specified embedding model (here, HuggingFace's &lt;code&gt;all-MiniLM-L6-v2&lt;/code&gt;), ensuring all vectors are compatible with the Pinecone index dimension.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is the foundation for semantic search and retrieval in your RAG pipeline.&lt;/li&gt;
&lt;li&gt;Using HuggingFace embeddings ensures local, API-free operation and avoids OpenAI dependencies.&lt;/li&gt;
&lt;li&gt;Ensures that all downstream retrieval and chat operations are based on a robust, scalable vector store.&lt;/li&gt;
&lt;/ul&gt;
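
&lt;p&gt;For context, here is a sketch of how the &lt;code&gt;embed_model&lt;/code&gt; and &lt;code&gt;pinecone_index&lt;/code&gt; referenced above might be created; the index name is a placeholder, and the Pinecone index must be created with dimension 384 to match &lt;code&gt;all-MiniLM-L6-v2&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from pinecone import Pinecone
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# 384-dimensional sentence embeddings, computed locally with no API key
embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Connect to an existing Pinecone index; "rag-index" is a placeholder name
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
pinecone_index = pc.Index("rag-index")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;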

&lt;h3&gt;
  
  
  Simulating human conversation with the agent (AssemblyAI, LlamaIndex, Anthropic)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Simulate conversation loop
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;queries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;chat_engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Uses history
&lt;/span&gt;    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Query: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Response: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Update index with convo for long-term learning
&lt;/span&gt;    &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;llama_index.core&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Document&lt;/span&gt;
    &lt;span class="n"&gt;convo_doc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; | Agent: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;convo_doc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This loop simulates a user-agent conversation.&lt;/li&gt;
&lt;li&gt;Each query is sent to the chat engine, which uses the RAG index and LLM (Anthropic) to generate a response.&lt;/li&gt;
&lt;li&gt;After each exchange, the conversation is wrapped as a &lt;code&gt;Document&lt;/code&gt; and inserted into the index, enabling the system to "learn" from new interactions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Importance:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Demonstrates how the system can perform continual learning, updating its knowledge base with new conversational data.&lt;/li&gt;
&lt;li&gt;This enables adaptive, context-aware responses and supports use cases like personalized assistants or evolving knowledge bases.&lt;/li&gt;
&lt;/ul&gt;
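
&lt;p&gt;For completeness, here is one way the &lt;code&gt;chat_engine&lt;/code&gt; used in the loop could be built from the index; the chat mode is an assumption, and the Anthropic model name is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from llama_index.llms.anthropic import Anthropic

# "context" mode retrieves relevant nodes from the index on every turn
# and keeps the running chat history that the loop relies on
llm = Anthropic(model="claude-3-5-sonnet-20240620")
chat_engine = index.as_chat_engine(chat_mode="context", llm=llm)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;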

</description>
      <category>devchallenge</category>
      <category>assemblyaichallenge</category>
      <category>ai</category>
      <category>api</category>
    </item>
    <item>
      <title>Algolia MCP enriches your Data through Claude Natural Language!🚀🍀🥇</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Thu, 24 Jul 2025 13:29:41 +0000</pubDate>
      <link>https://forem.com/julsr_mx/algolia-mcp-enriches-your-data-through-claude-natural-language-i9h</link>
      <guid>https://forem.com/julsr_mx/algolia-mcp-enriches-your-data-through-claude-natural-language-i9h</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia-2025-07-09"&gt;Algolia MCP Server Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;The primary use case revolves around automating the enrichment of Inc5000 company data stored in an Algolia index. Inc5000 refers to the annual list of the fastest-growing private companies in the United States, published by Inc. Magazine. The data includes attributes like company name, industry, growth rate, and years on the list (&lt;code&gt;yrs_on_list&lt;/code&gt;). The goal is to enhance this dataset with AI-generated insights to make it more valuable for search, analysis, or business intelligence purposes. This enrichment is powered by Anthropic's Claude AI model, integrated via API calls within an n8n workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/pV6RO6BN4P4"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Repo
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Juls95" rel="noopener noreferrer"&gt;
        Juls95
      &lt;/a&gt; / &lt;a href="https://github.com/Juls95/algoliaMCP" rel="noopener noreferrer"&gt;
        algoliaMCP
      &lt;/a&gt;
    &lt;/h2&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Algolia Node.js MCP – n8n Integration Guide&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://youtu.be/pV6RO6BN4P4" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/487cab05f2925705e5f2bfce83828cd82fa56b18708d66f2870d52b722e3feeb/68747470733a2f2f696d672e796f75747562652e636f6d2f76692f705636524f36424e3450342f302e6a7067" alt="n8n Integration Video"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Watch the video above for a walkthrough of the n8n workflow integration process.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 Quick Start: n8n Workflow Integration&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;This guide explains how to connect Algolia Node.js MCP to your custom &lt;a href="https://n8n.io/" rel="nofollow noopener noreferrer"&gt;n8n&lt;/a&gt; workflows, enabling you to trigger automations from Claude Desktop or any MCP-compatible client.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;1. Clone and Install MCP&lt;/h3&gt;
&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;git clone https://github.com/Juls95/algoliaMCP.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; algoliaMCP
npm install&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;2. Configure and Start MCP&lt;/h3&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authenticate&lt;/strong&gt; with Algolia:
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;node src/app.ts authenticate&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start the MCP server:&lt;/strong&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;node src/app.ts start-server&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;3. Configure Claude Desktop (or MCP Client)&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;Add this to your Claude Desktop configuration:&lt;/p&gt;
&lt;div class="highlight highlight-source-json notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;{
  &lt;span class="pl-ent"&gt;"mcpServers"&lt;/span&gt;: {
    &lt;span class="pl-ent"&gt;"algolia-mcp"&lt;/span&gt;: {
      &lt;span class="pl-ent"&gt;"command"&lt;/span&gt;: &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;&amp;lt;PATH_TO_BIN&amp;gt;/node&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
      &lt;span class="pl-ent"&gt;"args"&lt;/span&gt;: [
        &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;--experimental-strip-types&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
        &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;--no-warnings=ExperimentalWarning&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;,
        &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;"&lt;/span&gt;&amp;lt;PATH_TO_PROJECT&amp;gt;/src/app.ts&lt;span class="pl-pds"&gt;"&lt;/span&gt;&lt;/span&gt;
      ]
    }
  }
}&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Restart Claude Desktop after saving changes.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🆕 How the n8n Workflow Tool Works&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;The tool &lt;code&gt;triggerN8nWorkflow&lt;/code&gt; is included by default and registered automatically.&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Juls95/algoliaMCP" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Utilized the Algolia MCP Server
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Algolia MCP Server&lt;/strong&gt;: A local server that exposes Algolia's APIs (search, update, analysis) as tools accessible via natural language.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Desktop&lt;/strong&gt;: Anthropic's app that integrates MCP as a custom tool, allowing users to converse with Claude and invoke Algolia actions dynamically.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Launch MCP Server&lt;/strong&gt;: Run the Algolia MCP locally, configured with Algolia API keys. It listens for prompts and translates them into API calls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrate with Claude Desktop&lt;/strong&gt;: Add MCP as a tool in Claude Desktop's settings, enabling Claude to use it for tasks involving Algolia.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issue Natural Language Commands&lt;/strong&gt;: In Claude Desktop, prompt something like: "Search Algolia for companies with yrs_on_list:17, enrich with research on trends and categories, then update the index."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Processes the Prompt&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Claude interprets the intent.&lt;/li&gt;
&lt;li&gt;Invokes MCP tools: first a search tool to fetch the data, which Claude then parses and iterates over item by item.&lt;/li&gt;
&lt;li&gt;Uses its own reasoning or API calls for enrichment (e.g., generating summaries/trends).&lt;/li&gt;
&lt;li&gt;Calls MCP's update tool to push changes back to Algolia.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How It Worked Step by Step
&lt;/h3&gt;

&lt;p&gt;The system operates in two main phases: the core n8n workflow that fetches and updates data from Algolia, and the integration with Claude Desktop + Algolia MCP for agentic enhancements. Below is a detailed, sequential breakdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjhmmr0q8nw11iujnpxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjhmmr0q8nw11iujnpxb.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setup Overview&lt;/strong&gt;:

&lt;ol&gt;
&lt;li&gt;Wait for the message from Claude with the commands from the user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link to n8n&lt;/strong&gt;: The agentic flow can trigger or extend the n8n workflow. For example, Claude sends information to n8n using the MCP for further agentic processing.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fetching Data from Algolia&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;An HTTP Request node sends a POST request to Algolia's search endpoint: &lt;code&gt;/1/indexes/inc5000/query&lt;/code&gt; (see the request sketch after this list).&lt;/li&gt;
&lt;li&gt;The request body includes filters like &lt;code&gt;"yrs_on_list:17"&lt;/code&gt; to target companies with exactly 17 years on the list.&lt;/li&gt;
&lt;li&gt;Additional parameters:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;distinct=1&lt;/code&gt;: Ensures only one record per company (deduplication).&lt;/li&gt;
&lt;li&gt;Custom ranking: &lt;code&gt;desc(year)&lt;/code&gt; to prioritize the most recent entries.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hitsPerPage:5&lt;/code&gt;: Limits results per page for manageable batches.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;attributesToRetrieve: ["objectID", "company_name", "industry", "growth_rate", "yrs_on_list"]&lt;/code&gt;: Retrieves only essential fields to optimize performance.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Pagination is handled by starting with &lt;code&gt;page=0&lt;/code&gt;. After processing a page, an IF condition checks if &lt;code&gt;nbHits &amp;gt; (page+1)*hitsPerPage&lt;/code&gt;; if true, the page increments, and the workflow loops back to fetch the next page.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
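
&lt;p&gt;A minimal Python sketch of the search request the HTTP Request node sends (credentials are placeholders, and the actual workflow issues this call from n8n rather than Python):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from urllib.parse import urlencode

import requests

APP_ID = "YOUR_ALGOLIA_APP_ID"          # placeholder credentials
SEARCH_API_KEY = "YOUR_SEARCH_API_KEY"

url = f"https://{APP_ID}-dsn.algolia.net/1/indexes/inc5000/query"
headers = {
    "X-Algolia-Application-Id": APP_ID,
    "X-Algolia-API-Key": SEARCH_API_KEY,
}

# Mirrors the node's parameters: filter, dedupe, small pages, and only
# the fields the enrichment step needs.
# Pagination: the workflow loops while nbHits &gt; (page + 1) * hitsPerPage.
params = urlencode({
    "filters": "yrs_on_list:17",
    "distinct": 1,
    "hitsPerPage": 5,
    "page": 0,
    "attributesToRetrieve": "objectID,company_name,industry,growth_rate,yrs_on_list",
})
hits = requests.post(url, headers=headers, json={"params": params}).json()["hits"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;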

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Iterating Over Fetched Records&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The fetched data (an array of "hits") is split using an Item Lists node to break it into individual items.&lt;/li&gt;
&lt;li&gt;A Loop Over Items node (using splitInBatches v3) processes each item one by one with &lt;code&gt;batchSize=1&lt;/code&gt;. This ensures sequential, reliable handling, avoiding parallel API overloads.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enriching Each Company with Claude API&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For each company record, an HTTP Request node calls Anthropic's Claude API: POST &lt;code&gt;/v1/messages&lt;/code&gt; (a Python sketch of the enrichment-and-update steps follows this list).&lt;/li&gt;
&lt;li&gt;Model used: &lt;code&gt;claude-3-5-sonnet-20240620&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The prompt instructs Claude to generate:

&lt;ul&gt;
&lt;li&gt;A 100-word summary based on the company's details (e.g., name, industry, growth rate).&lt;/li&gt;
&lt;li&gt;Predicted trends (e.g., market expansions or challenges).&lt;/li&gt;
&lt;li&gt;Categories (e.g., sector classifications).&lt;/li&gt;
&lt;li&gt;Tags and scores (e.g., 'Fast-Growing' with a score of 8/10, 'Tech Innovator' with 7/10).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The prompt specifies JSON output for easy parsing.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Parsing the Claude Response&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Set node extracts fields from Claude's JSON response: &lt;code&gt;summary&lt;/code&gt;, &lt;code&gt;trends&lt;/code&gt;, &lt;code&gt;categories&lt;/code&gt;, &lt;code&gt;objectID&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;This node merges the new enriched data with the original Algolia record.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Updating Algolia with Enriched Data&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Another HTTP Request node uses Algolia's partialUpdateObject API (single-object endpoint) for reliability.&lt;/li&gt;
&lt;li&gt;It sends updates one by one using the Admin API Key.&lt;/li&gt;
&lt;li&gt;This avoids batch failures; if one update fails, others continue.&lt;/li&gt;
&lt;li&gt;Key fix: Ensured the JSON body is correctly formatted without unnecessary &lt;code&gt;JSON.parse&lt;/code&gt; on arrays.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Aggregating Enriched Records Across Iterations and Pages&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As the loop processes items and pages, a Merge node (in "All Item Data" mode) accumulates all enriched records into a global array.&lt;/li&gt;
&lt;li&gt;A Set node helps manage this aggregation, ensuring data from multiple pages and iterations is collected without loss.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Sending Completion Email&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once all processing is done, a Send Email node (v2.1) uses SMTP (e.g., Gmail with App Password) to notify the user.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
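
&lt;p&gt;The enrichment, parsing, and update steps above can be sketched in Python as follows (the prompt wording, key names, and credentials are assumptions; the real workflow performs these calls via n8n HTTP Request nodes):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

import requests

ANTHROPIC_KEY = "YOUR_ANTHROPIC_API_KEY"   # placeholder credentials
ADMIN_API_KEY = "YOUR_ALGOLIA_ADMIN_KEY"
APP_ID = "YOUR_ALGOLIA_APP_ID"

def enrich_and_update(company):
    # Ask Claude for the enrichment fields as JSON, mirroring the n8n prompt
    prompt = (
        f"For {company['company_name']} ({company['industry']}, growth rate "
        f"{company['growth_rate']}), return JSON with keys: summary "
        "(100 words), trends, categories, and tags with scores."
    )
    resp = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={
            "x-api-key": ANTHROPIC_KEY,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-3-5-sonnet-20240620",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        },
    )
    enriched = json.loads(resp.json()["content"][0]["text"])

    # Push the new fields back with partialUpdateObject, one object at a
    # time so a single failure does not abort the whole batch
    requests.post(
        f"https://{APP_ID}.algolia.net/1/indexes/inc5000/"
        f"{company['objectID']}/partial",
        headers={
            "X-Algolia-Application-Id": APP_ID,
            "X-Algolia-API-Key": ADMIN_API_KEY,
        },
        json=enriched,
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;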

&lt;h4&gt;
  
  
  Key Challenges Resolved During Implementation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Iteration Issues&lt;/strong&gt;: Initial problems with looping over arrays were fixed by using Item Lists to split hits and setting batchSize=1 for per-item processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregation&lt;/strong&gt;: Switched to "All Item Data" mode in Merge nodes, with careful pagination accumulation to build a complete list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update Failures&lt;/strong&gt;: Batch updates were unreliable, so we switched to single-object endpoints and corrected the JSON body handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email Formatting&lt;/strong&gt;: Used array methods (.map/.join) to create readable lists in HTML.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>algoliachallenge</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>Data Culture in the Tech Era</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Mon, 14 Jul 2025 19:57:14 +0000</pubDate>
      <link>https://forem.com/julsr_mx/data-culture-in-the-tech-era-54l5</link>
      <guid>https://forem.com/julsr_mx/data-culture-in-the-tech-era-54l5</guid>
      <description>&lt;p&gt;Working in data and technology is highly sought after these days, especially with the massive implementation of tech tools and the vast amount of information generated. In this post, I'll share relevant aspects that I believe should be taken into account before starting deliverables and requesting projects with internal teams. 🚀📊&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Culture
&lt;/h3&gt;

&lt;p&gt;Data culture is the foundation of any data initiative. Beyond releasing new features every day or implementing new technologies, we must be able to understand and support the organization as a whole. Above all, we need to foster a healthy data culture that involves the entire company, not just the executive level but also managers and individual contributors. This makes it clear who the users involved in defining data projects will be, which processes will be improved, and which technology will be necessary and which we can gradually phase out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sponsors
&lt;/h3&gt;

&lt;p&gt;Once we have strategic clarity on the above, what follows? It's important to secure executive sponsors. These people are key: they act as our right hand, relaying requirements from leadership so we know what the business needs right now. They also give us a voice to raise our concerns, requirements, and comments, keeping everything aligned with the data culture and business objectives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Business Alignment
&lt;/h3&gt;

&lt;p&gt;Alongside the executive sponsor, business alignment is what lets us see how everything we do solves real problems, contributes to a better workplace, facilitates problem-solving, and optimizes time and resources. This alignment should focus on understanding, from the business side, what they currently do and use, what hurts them, what they lack, and why they rely on what they have today. That last point is essential for improving current processes rather than duplicating work in the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Content Management
&lt;/h3&gt;

&lt;p&gt;This is where understanding which strategy works for our business gives everything its shape. Where do we start? What works for us? How do we divide and assign tasks? Here we analyze everything gathered from the three topics above. According to Microsoft (&lt;a href="https://learn.microsoft.com/en-us/power-bi/guidance/fabric-adoption-roadmap-content-ownership-and-management" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/power-bi/guidance/fabric-adoption-roadmap-content-ownership-and-management&lt;/a&gt;), content can be handled in three ways: business-led self-service BI, managed self-service BI, and enterprise BI. The strategy we choose will shape our operating model, our CoE, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Delivery Scope
&lt;/h3&gt;

&lt;p&gt;Now that we know how the team will be managed and which strategy benefits us, let's review how we'll deliver reports. There are four possible levels of scope, ordered from fewest to most users: personal, team, departmental, and enterprise.&lt;/p&gt;

&lt;p&gt;Conversely, when it comes to creating content at each level, the expectation is that the greater the number of consumers, the fewer the creators. This keeps control and governance over the reports tighter.&lt;/p&gt;

&lt;h3&gt;
  
  
  Center of Excellence
&lt;/h3&gt;

&lt;p&gt;To manage report delivery cleanly, we must create a CoE that evangelizes the data-driven culture across the company: training, guiding, and educating people so they follow good practices when creating reports, minimizing the risks of misusing data or visualizations. 🔍💡&lt;/p&gt;

&lt;h3&gt;
  
  
  Governance
&lt;/h3&gt;

&lt;p&gt;Governance is a broad topic, but in short its objective is to govern what users do with data and to ensure the company's data is well managed. To that end, we should enable users to work within internal guardrails that align with the organization, maintain consistency in documentation, deployment, and process standardization, and reduce the risk of data leaking beyond the limits set by the organization and the BI team.&lt;/p&gt;

&lt;p&gt;What do you think about these aspects? Share your thoughts in the comments and let's have a productive discussion!&lt;/p&gt;

</description>
      <category>data</category>
      <category>analytics</category>
      <category>datascience</category>
      <category>devdiscuss</category>
    </item>
    <item>
      <title>Hackathon scanner to spark your next big idea!</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Mon, 07 Jul 2025 00:39:37 +0000</pubDate>
      <link>https://forem.com/julsr_mx/hackathon-scanner-to-spark-your-next-big-idea-3e1e</link>
      <guid>https://forem.com/julsr_mx/hackathon-scanner-to-spark-your-next-big-idea-3e1e</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/runnerh"&gt;Runner H "AI Agent Prompting" Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A hackathon scanner for DoraHacks to gain insights into which projects are winning and their themes. No more headaches searching for what to build—just get inspired, imagine, and execute.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.loom.com/share/1ea83d13568c4d1383d99a691a264dad?sid=4d420111-864c-4db6-8adf-98829b2db0ac" rel="noopener noreferrer"&gt;Full Video w/excel&lt;/a&gt;&lt;br&gt;
&lt;a href="https://runner.hcompany.ai/chat/d50577dd-a456-4345-acfc-5fb506ab30d2/share" rel="noopener noreferrer"&gt;Runner H Process&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How I Used Runner H
&lt;/h2&gt;

&lt;p&gt;To start, I explored the DoraHacks hackathon page to determine how to retrieve data from completed hackathons with announced winners. Then, I identified the relevant information on the page and configured Runner H to process it. Below is a detailed explanation of the process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### Step 1: Get Ready
- Create a new Google Sheets file called "DoraHacks_Hackathon_Data".
- Make three sections (sheets) in the file:
  - Sheet 1: Hackathon Details
  - Sheet 2: Hackathon Winners
  - Sheet 3: Winner Projects

### Step 2: List the Hackathon Pages
- Use these five hackathon report pages:
  - https://dorahacks.io/hackathon/xion/report
  - https://dorahacks.io/hackathon/calimero-x-icp/report
  - https://dorahacks.io/hackathon/cambrian-hack/report
  - https://dorahacks.io/hackathon/aixfintech/report
  - https://dorahacks.io/hackathon/core-gaming-hackathon/report

### Step 3: Collect Information from Each Hackathon Page
For each hackathon page, do the following:

#### 3.1 Visit the Hackathon Page
- Go to the hackathon report page using the link.

#### 3.2 Get Hackathon Summary
- Find the "Hackathon Summary" section.
- Write down:
  - Hackathon name
  - Description
  - Prize amount
  - Start and end dates
  - Any other details (like number of participants or themes)
- Save this information with labels like:
  - Hackathon Name, Summary Description, Prize Amount, Dates, Other Details

#### 3.3 Get Hackathon Winners
- Find the "Hackathon Winners" section.
- For each winner, write down:
  - Winner or team name
  - Project name (if shown)
  - Prize or rank (like 1st or 2nd place)
  - Link to the winner’s project page on DoraHacks
- Save this information with labels like:
  - Hackathon Name, Winner Name, Project Name, Prize or Rank, Winner Link

#### 3.4 Get Organization Overview
- Find the "Organization Overview" section.
- Write down:
  - Organization name
  - Description
  - Any other details (like their goals or website)
- Save this information with labels like:
  - Hackathon Name, Organization Name, Organization Description, Other Details

### Step 4: Collect Information from Each Winner’s Project Page
For each winner’s project link found in Step 3.3:

#### 4.1 Visit the Winner’s Project Page
- Go to the winner’s project page using the link.

#### 4.2 Get Project Details
- Write down:
  - Project name (usually at the top)
  - Introduction (the text describing the project)
  - Tags (like DeFi or NFT, shown as labels or categories)
  - The project page link (the URL you’re on)
- Save this information with labels like:
  - Hackathon Name, Project Name, Introduction, Tags, Project Link

### Step 5: Put the Information into Google Sheets
Add the collected information to the Google Sheets file like this:

#### 5.1 Sheet 1: Hackathon Details
- Use these columns:
  - Hackathon Name
  - Summary Description
  - Prize Amount
  - Dates
  - Other Summary Details
  - Organization Name
  - Organization Description
  - Other Organization Details
  - Hackathon Link
- For each hackathon, add one row with the information from Steps 3.2 and 3.4, plus the hackathon page link.

#### 5.2 Sheet 2: Hackathon Winners
- Use these columns:
  - Hackathon Name
  - Winner Name
  - Project Name
  - Prize or Rank
  - Winner Link
- For each winner from Step 3.3, add one row.

#### 5.3 Sheet 3: Winner Projects
- Use these columns:
  - Hackathon Name
  - Project Name
  - Introduction
  - Tags
  - Project Link
- For each winner project from Step 4.2, add one row.

### Step 6: Handle Missing Information
- If a section (like Hackathon Summary) isn’t there, write "Not available" in that field.
- If a winner’s link doesn’t work, skip it and note the hackathon name and link as a problem.
- Make sure every field has something, even if it’s "N/A", to keep the sheets consistent.

### Step 7: Save and Share the Google Sheets File
- Save all the information in the Google Sheets file.
- Create a link to the file that others can view.
- Share the link with the user.

## Final Output
- The result is a Google Sheets file named "DoraHacks_Hackathon_Data" with three sheets filled with the collected information.
- The user gets a link to view the file.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Use Case &amp;amp; Impact
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Developers: Easily discover which projects are winning recent hackathons, enabling clearer decisions on what to build without spending excessive time starting from scratch, and use the winners as inspiration for your own projects.&lt;/li&gt;
&lt;li&gt;Investors: Find potential investment opportunities or talent in hackathons with significant prizes or specific themes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Social Love
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://x.com/julsr_mx/status/1942020484779905467" rel="noopener noreferrer"&gt;https://x.com/julsr_mx/status/1942020484779905467&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>runnerhchallenge</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Intelligent AI Data agent</title>
      <dc:creator>Julian</dc:creator>
      <pubDate>Sat, 05 Jul 2025 22:18:58 +0000</pubDate>
      <link>https://forem.com/julsr_mx/intelligent-ai-data-agent-3aeb</link>
      <guid>https://forem.com/julsr_mx/intelligent-ai-data-agent-3aeb</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/runnerh"&gt;Runner H "AI Agent Prompting" Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Get the insights and data to invest in a smart way! Runner H simulates going to CoinMarketCap, sorting the data in descending order, and exploring the top 10 assets. It also visits each project's X account to collect their follower and following counts. All of this is summarized into a spreadsheet so the information can be viewed quickly in one consolidated place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://hcompany.ai/surfer-2" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fframerusercontent.com%2Fassets%2F7eFHjmJeoNnvAhCSfhNTx0E8Njc.jpg" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://hcompany.ai/surfer-2" rel="noopener noreferrer" class="c-link"&gt;
            Surfer 2: The Next Generation of Cross-Platform Computer-Use Agents - H Company
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            We are a frontier AI research company that designs, builds, and deploys cost-efficient agentic AI systems directly into enterprises’ core workflows and processes.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fframerusercontent.com%2Fimages%2FqVilPhQQTJzvgGDNLjrYwa5xQ.png"&gt;
          hcompany.ai
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Used Runner H
&lt;/h2&gt;

&lt;p&gt;Runner H simulates going to CoinMarketCap, sorting the data in descending order, and exploring the top 10 assets. It also visits each project's X account to collect their posts and interactions. All of this is summarized into a spreadsheet for quick, consolidated viewing.&lt;/p&gt;

&lt;p&gt;Below is a step-by-step guide which explains how the agent works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Step 1: Access and Sort the DeFi Page
- Open your web browser and navigate to the following URL: https://coinmarketcap.com/view/defi/.
- Locate the sorting option on the page (typically found near the column headers).
- Sort the list by "Volume" in descending order (highest to lowest) to identify the top-performing DeFi assets based on trading volume.

## Step 2: Extract Data from the Top 10 Assets
- From the sorted list, identify the first 10 assets.
- For each of these assets, click on their respective links to visit their individual pages.
- On each asset's page, gather the following information:
  - **Market Cap**: The total market capitalization of the asset.
  - **Volume**: The 24-hour trading volume.
  - **Price**: The current price of the asset.
  - **Project Name**: The official name of the project.
  - **How It’s Used**: A brief description of the project’s purpose or utility (e.g., decentralized finance applications, staking, etc.).
- Calculate the **Percentage Change of Volume** for each asset over the last 24 hours (this data is often available on the page; note the change and determine if it’s positive or negative).
- Create a column labeled **Percentage Indicator** and mark it as "Positive" or "Negative" based on the volume change.

## Step 3: Gather Additional Information from the About Page
- For each of the 10 assets, modify the URL https://coinmarketcap.com/currencies/[symbol]/#About by replacing [symbol] with the asset’s unique symbol (e.g., https://coinmarketcap.com/currencies/uni/#About for Uniswap).
- Visit each modified URL and extract:
  - **About Title**: The title or headline of the "About" section.
  - **About Description**: A summary or detailed explanation of the project from the "About" section.

## Step 4: Create and Populate a Google Sheets Document
- Open Google Sheets and create a new spreadsheet.
- Name the first tab "DeFi_Analysis".
- Set up the following columns in the first row:
  - Symbol
  - Volume
  - Percentage Change
  - Percentage Indicator
  - Market Cap
  - Price
  - About Title
  - About Description
- Enter the collected data for each of the 10 assets into the corresponding rows under these columns.

## Step 5: Collect Twitter Information
- For each of the 10 assets, visit their official Twitter (X) profiles (links are often provided on their CoinMarketCap pages or through a quick search).
- Record the following details:
  - **Twitter Handle**: The asset’s official Twitter username (e.g., @Uniswap).
  - **Followers**: The number of followers the account has.
  - **Following**: The number of accounts the asset follows.
- Create a new tab in the same Google Sheets document and name it "Twitter_Data".
- Set up the following columns in the first row of the "Twitter_Data" tab:
  - Symbol
  - Twitter Handle
  - Followers
  - Following
- Enter the collected Twitter data for each of the 10 assets into the corresponding rows.

## Step 6: Review and Save
- Double-check all entered data for accuracy across both tabs.
- Save the Google Sheets document with a descriptive name (e.g., "DeFi_Analysis_2025-07-05") to ensure easy access for future reference.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Use Case &amp;amp; Impact
&lt;/h2&gt;

&lt;p&gt;Both investors and developers benefit from this agent: it saves a lot of time that would otherwise be spent going page by page to spot what's trending. During its exploration it gathers the information needed to judge each project's relevance and its activity on X, showing whether a project is being actively worked on and helping avoid decisions driven by price hype alone by weighing both price data and social interactions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Social Love
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://x.com/julsr_mx/status/1941622691389374647" rel="noopener noreferrer"&gt;https://x.com/julsr_mx/status/1941622691389374647&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>runnerhchallenge</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
