<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bhimashankar Patil</title>
    <description>The latest articles on Forem by Bhimashankar Patil (@shadow_b).</description>
    <link>https://forem.com/shadow_b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1085035%2Fd0190071-48ff-4b99-9d35-3cd090617ee3.jpeg</url>
      <title>Forem: Bhimashankar Patil</title>
      <link>https://forem.com/shadow_b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shadow_b"/>
    <language>en</language>
    <item>
      <title>Soon, AI Won’t Just Answer: It’ll Run Your Life</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Wed, 11 Jun 2025 10:00:49 +0000</pubDate>
      <link>https://forem.com/shadow_b/soon-ai-wont-just-answer-itll-run-your-life-1j5m</link>
      <guid>https://forem.com/shadow_b/soon-ai-wont-just-answer-itll-run-your-life-1j5m</guid>
      <description>&lt;p&gt;You wake up, and your AI has already paid your phone bill, ordered milk because your smart fridge noticed you're running low, moved your 3 PM meeting to 4 PM because of traffic, and booked a table at your favourite restaurant for your anniversary next week. It even saw that you only got 4 hours of deep sleep and automatically ordered your usual coffee to be ready when you get to work&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9gmbhs0j62mpvoedg2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9gmbhs0j62mpvoedg2e.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Today's AI Just Talks. Tomorrow's AI Actually Does Things
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Handle Your Money
&lt;/h4&gt;

&lt;p&gt;Pay your Netflix bill, rent, and credit cards automatically&lt;br&gt;
Notice you spent $200 on coffee this month and suggest ways to save&lt;br&gt;
Move money from checking to savings when you have extra&lt;br&gt;
Text you: "Someone just charged $500 to your card at Best Buy. Was this you?"&lt;/p&gt;

&lt;h4&gt;
  
  
  Shop Like Your Personal Buyer
&lt;/h4&gt;

&lt;p&gt;Order toilet paper before you run out (it knows you use 2 rolls per week)&lt;br&gt;
Find the best deal on that laptop you've been wanting and buy it when the price drops&lt;br&gt;
Return that shirt that doesn't fit and order the right size&lt;br&gt;
Say "Hey, I found 5-star wireless headphones for $30. Want me to grab them?"&lt;/p&gt;

&lt;h4&gt;
  
  
  Plan Your Trips From Start to Finish
&lt;/h4&gt;

&lt;p&gt;Book your flight to Chicago, find a hotel near your meeting, and order an Uber for the airport&lt;br&gt;
Create a day-by-day plan for your Paris vacation based on your love of art museums and good food&lt;br&gt;
Remind you to renew your passport 6 months before your trip to Japan&lt;br&gt;
Rebook your delayed flight and update your hotel reservation automatically&lt;/p&gt;

&lt;h4&gt;
  
  
  Keep You Healthy
&lt;/h4&gt;

&lt;p&gt;Remind you to take your vitamins and track if you actually did it&lt;br&gt;
Schedule your annual checkup and send you the reminder&lt;br&gt;
Order healthy groceries based on your diet goals&lt;br&gt;
Notice your heart rate has been high this week and suggest you get more sleep&lt;/p&gt;

&lt;h4&gt;
  
  
  Manage Your Schedule Like a Pro
&lt;/h4&gt;

&lt;p&gt;Look at your calendar and automatically block 2 hours for deep work every morning&lt;br&gt;
Reschedule your dentist appointment when your work meeting runs long&lt;br&gt;
Send follow-up emails after your meetings with action items&lt;br&gt;
Know you're most productive at 10 AM and schedule important calls then&lt;/p&gt;

&lt;h3&gt;
  
  
  How This Actually Works
&lt;/h3&gt;

&lt;p&gt;This sounds like magic, but it's really just several technologies working together:&lt;/p&gt;

&lt;p&gt;Smart AI with Perfect Memory: Think of ChatGPT, but it remembers every conversation you've ever had, every preference you've mentioned, and everything about your life.&lt;/p&gt;

&lt;p&gt;AI That Can See and Hear: Your phone's camera becomes the AI's eyes, the microphone becomes its ears. It can read your emails, see your calendar, and understand your daily routine.&lt;/p&gt;

&lt;p&gt;AI That Can Use Apps: New technology lets AI actually click buttons, fill out forms, and use websites just like you do. It can log into your bank account, order from Amazon, or book flights on Expedia.&lt;/p&gt;

&lt;p&gt;Everything Has an API: Banks, airlines, food delivery apps—they're all building ways for AI to connect and take actions automatically.&lt;/p&gt;

&lt;p&gt;Your Phone + The Cloud: Some thinking happens on your phone for privacy, while the heavy computing happens in data centers.&lt;/p&gt;
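&lt;p&gt;Stripped of the magic, that loop is just: observe state, decide, act. A toy sketch of the pattern (every threshold and action name here is invented for illustration):&lt;/p&gt;

```python
def agent_step(state):
    """Decide which actions a personal agent would take for the observed state.

    Toy logic only: a real agent would call provider APIs to actually act.
    """
    actions = []
    if 0.5 > state["milk_litres"]:          # fridge reports low stock
        actions.append("order_milk")
    if state["traffic_delay_min"] > 30:     # commute is running long
        actions.append("reschedule_meeting")
    if 5 > state["deep_sleep_hours"]:       # rough night of sleep
        actions.append("order_coffee")
    return actions

print(agent_step({"milk_litres": 0.2, "traffic_delay_min": 45, "deep_sleep_hours": 4}))
# ['order_milk', 'reschedule_meeting', 'order_coffee']
```

&lt;p&gt;The hard parts in practice are the "observe" and "act" edges—connecting real data sources and real APIs—not the decision logic itself.&lt;/p&gt;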

&lt;h3&gt;
  
  
  Your Life, All in One Place
&lt;/h3&gt;

&lt;p&gt;Imagine opening one app that shows you:&lt;br&gt;
How much money you spent this month, and where it went&lt;br&gt;
Your next trip to Seattle with the full itinerary already planned&lt;br&gt;
Your fitness goals and how you're doing (you walked 8,000 steps yesterday; the goal is 10,000)&lt;br&gt;
Important emails you need to respond to, and which ones your AI has already handled&lt;br&gt;
Your mom's birthday next week, and the gift your AI has already ordered&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Like a Glove HTB: Decoding Metaphors with GloVe Embeddings</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Fri, 25 Apr 2025 16:18:07 +0000</pubDate>
      <link>https://forem.com/shadow_b/like-a-glove-htb-decoding-metaphors-with-glove-embeddings-4p67</link>
      <guid>https://forem.com/shadow_b/like-a-glove-htb-decoding-metaphors-with-glove-embeddings-4p67</guid>
      <description>&lt;h2&gt;
  
  
  The Challenge:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Words carry semantic information. Similar to how people can infer meaning based on a word's context, AI can derive representations for words based on their context too! However, the kinds of meaning that a model uses may not match ours. We've found a pair of AIs speaking in metaphors that we can't make any sense of! The embedding model is glove-twitter-25. Note that the flag should be fully ASCII and starts with 'htb{some_text}'.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0dh60gs3lgzhzskvpe0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0dh60gs3lgzhzskvpe0.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ever wondered how AI understands metaphors and analogies? This Hack The Box challenge threw me into a linguistic maze filled with strange word pairs and metaphorical riddles. The twist? It had to be solved using &lt;strong&gt;GloVe Twitter embeddings&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;Each line follows the analogy format:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;A is to B, as C is to ?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These were weird combinations, mixing English, Unicode characters, emojis, and foreign scripts. We’re told that the embedding model in use is &lt;code&gt;glove-twitter-25&lt;/code&gt;.&lt;/p&gt;
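&lt;p&gt;The classic example of this arithmetic is &lt;em&gt;man is to woman as king is to ?&lt;/em&gt;. A toy sketch with made-up 2-d vectors (the real challenge uses the 25-dimensional &lt;code&gt;glove-twitter-25&lt;/code&gt; embeddings) shows the idea:&lt;/p&gt;

```python
import numpy as np

# Toy 2-d "embeddings" with invented values, standing in for real GloVe vectors.
vocab = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([2.0, 0.0]),
    "queen": np.array([2.0, 1.0]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the nearest cosine neighbour of b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(vocab, key=lambda w: cos(vocab[w], target))

print(analogy("man", "woman", "king"))  # queen
```

&lt;p&gt;The solver below does exactly this, only with gensim's &lt;code&gt;most_similar&lt;/code&gt; doing the nearest-neighbour search over the full vocabulary.&lt;/p&gt;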

&lt;h2&gt;
  
  
  Goal:
&lt;/h2&gt;

&lt;p&gt;Infer the missing fourth term using word embeddings and extract the final flag, which must be ASCII and start with &lt;code&gt;htb{}&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools &amp;amp; Setup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Model: &lt;code&gt;glove-twitter-25&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Library: &lt;code&gt;gensim&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Input: &lt;code&gt;challenge.txt&lt;/code&gt; (a list of analogies)&lt;/li&gt;
&lt;li&gt;Output: &lt;code&gt;flag.txt&lt;/code&gt; (the inferred flag characters)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import re
from gensim.models import KeyedVectors

def load_glove_model():
    model_path = "glove.twitter.27B/glove.twitter.27B.25d.txt"
    model = KeyedVectors.load_word2vec_format(model_path, binary=False, no_header=True)
    return model

def parse_challenge(file_path, model):
    # The analogies mix Unicode scripts and emoji, so read explicitly as UTF-8.
    with open(file_path, 'r', encoding='utf-8') as file:
        lines = file.readlines()

    results = []
    flag_characters = []

    for i, line in enumerate(lines):        
        match = re.search(r"Like (.+?) is to (.+?), (.+?) is to\?", line.strip())
        if not match:
            match = re.search(r"Like (.+) is to (.+), (.+) is to\?", line.strip())
            if not match:
                continue

        key, value, query = match.groups()
        key = key.strip()
        value = value.strip()
        query = query.strip()
        print(f"Extracted: '{key}' -&amp;gt; '{value}', '{query}' -&amp;gt; ?")       
        try:
            missing_words = []
            for word in [key, value, query]:
                if word not in model:
                    missing_words.append(word)           
            if missing_words:
                print(f"Skipping due to missing words: {missing_words}")
                continue
            # This performs the vector math   
            result_vector = model[value] - model[key] + model[query]
            closest_word = model.most_similar(positive=[result_vector], topn=1)[0][0]

            print(f"Closest match for '{query}' is '{closest_word}'")

            flag_characters.append((i, query, closest_word))
        except KeyError as e:
            print(f"Error: {e}")
            continue


    mapped_chars = [char[2] for char in flag_characters]

    potential_flag = ''.join(mapped_chars)
    print(f"Potential flag sequence: {potential_flag}")

    normalized_flag = potential_flag
    replacements = {
        '０': '0', '１': '1', '２': '2', '３': '3', '４': '4',
        '５': '5', '６': '6', '７': '7', '８': '8', '９': '9'
    }

    for non_ascii, ascii_char in replacements.items():
        normalized_flag = normalized_flag.replace(non_ascii, ascii_char)

    print(f"Normalized flag: {normalized_flag}")
    return normalized_flag

if __name__ == "__main__":
    challenge_file = "challenge.txt"
    model = load_glove_model()
    flag_sequence = parse_challenge(challenge_file, model)
    print("FINAL FLAG:")
    print(flag_sequence)

    # Create a clean output file with just the flag
    with open('flag.txt', 'w') as flag_file:
        flag_file.write(flag_sequence)
    print("Flag has been saved to flag.txt")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
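&lt;p&gt;As an aside, the manual replacement table above only covers full-width digits. Python's standard &lt;code&gt;unicodedata&lt;/code&gt; module can fold all such compatibility characters to ASCII at once:&lt;/p&gt;

```python
import unicodedata

# NFKC "compatibility" normalization maps full-width digits and letters
# (among many other variant forms) down to their plain ASCII equivalents.
print(unicodedata.normalize("NFKC", "ｈｔｂ{１２３}"))  # htb{123}
```

&lt;p&gt;The explicit table still makes the intent obvious in a write-up, but NFKC is the general-purpose tool when the embeddings emit other full-width characters.&lt;/p&gt;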



&lt;h3&gt;
  
  
  Steps to Run:
&lt;/h3&gt;

&lt;p&gt;1) Place the GloVe files in "glove.twitter.27B/" (the code loads "glove.twitter.27B.25d.txt").&lt;br&gt;
2) Run "python main.py" to process "challenge.txt".&lt;br&gt;
3) The resulting flag is written to "flag.txt".&lt;/p&gt;

&lt;p&gt;This challenge was a fun mix of NLP, embeddings, and CTF logic. It’s not every day you have AIs “speaking in metaphors,” and it was fascinating to reverse-engineer that conversation!&lt;/p&gt;

&lt;h5&gt;
  
  
  Let me know if you faced the same challenge — I’d love to compare notes!
&lt;/h5&gt;

&lt;h4&gt;
  
  
  GitHub link to the solution: &lt;a href="https://github.com/bhimapatil/glove_challenge_htb" rel="noopener noreferrer"&gt;bhimapatil/glove_challenge_htb&lt;/a&gt;
&lt;/h4&gt;

</description>
      <category>hackthebox</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>datascience</category>
    </item>
    <item>
      <title>The Art of Data Storytelling: What I Learned</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Wed, 23 Apr 2025 19:52:34 +0000</pubDate>
      <link>https://forem.com/shadow_b/the-art-of-data-storytelling-what-i-learned-4pj4</link>
      <guid>https://forem.com/shadow_b/the-art-of-data-storytelling-what-i-learned-4pj4</guid>
      <description>&lt;p&gt;We all know the pain of sitting through a presentation filled with charts that don’t tell us much, or worse, confuse more than they clarify. I recently read &lt;strong&gt;Storytelling with Data&lt;/strong&gt; and it completely shifted the way I think about communicating insights. This isn’t just about pretty visuals. It’s about &lt;strong&gt;clear, meaningful communication&lt;/strong&gt;, especially when you're working with project sponsors, business users, or delivery teams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvthz5r5rx5by9rwkufr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvthz5r5rx5by9rwkufr.png" alt="Image description" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Know Your People First
&lt;/h2&gt;

&lt;p&gt;Before diving into dashboards or diagrams, pause and ask: &lt;strong&gt;Who is this for?&lt;/strong&gt; That clarity changes everything.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Project sponsors&lt;/strong&gt; need confidence that your solution maps to the business goals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business users&lt;/strong&gt; want functionality that makes their lives easier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delivery teams&lt;/strong&gt; care about feasibility and handoffs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don’t know your audience’s &lt;em&gt;goals, pain points, and decision power&lt;/em&gt;, it’s like planning a trip without a destination. Tailoring your message to each stakeholder group ensures your work stays relevant and actionable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Explanatory Analysis: Go from Messy to Meaningful
&lt;/h2&gt;

&lt;p&gt;This is where the rubber meets the road. Explanatory analysis is about refining raw input (data, conversations, assumptions) into &lt;strong&gt;clear, prioritized requirements&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here’s a simple guide:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2wvdaszy1j14ija5b3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2wvdaszy1j14ija5b3u.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Who?&lt;/strong&gt; Which stakeholder or user group has this need?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What?&lt;/strong&gt; What specific outcome or capability are they after?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How?&lt;/strong&gt; In what process or context will they use this?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’re basically turning chaos into clarity. It’s detective work, really: listening, translating, and simplifying.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 3-Minute Story: Nail the Essence Fast
&lt;/h2&gt;

&lt;p&gt;Imagine you're in a kickoff meeting, and time is tight. You’ve got just &lt;em&gt;three minutes&lt;/em&gt; to explain a key requirement. What do you say?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndtyi62po9zj5ejsej10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndtyi62po9zj5ejsej10.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is your 3-minute story. No fluff. No jargon.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start with the core need.&lt;/strong&gt; What does this user or team absolutely need to succeed?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explain the benefit.&lt;/strong&gt; How will this requirement improve things—faster workflow, reduced risk, better decision-making?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mini-story helps you (and everyone else) zoom in on what matters most. If you can’t explain it clearly in three minutes, you probably haven’t understood it well enough yet.&lt;/p&gt;




&lt;h2&gt;
  
  
  Use a Storyboard Before You Get Fancy
&lt;/h2&gt;

&lt;p&gt;Before writing specs or building wireframes, sketch out the &lt;strong&gt;user flow&lt;/strong&gt;—simple slides or diagrams that show how users interact with the solution.&lt;/p&gt;

&lt;p&gt;Think of it like mapping a journey:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrjat0f3al1ojo1finij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrjat0f3al1ojo1finij.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These early sketches make a huge difference. They force you to think logically and spot missing pieces &lt;em&gt;before&lt;/em&gt; things get complex (and costly).&lt;/p&gt;




&lt;h2&gt;
  
  
  Gestalt Principles: Design That Feels Right
&lt;/h2&gt;

&lt;p&gt;You don’t need to be a graphic designer to make your visuals work harder. Just use some basic &lt;strong&gt;Gestalt principles&lt;/strong&gt; to help the eye make sense of things.&lt;/p&gt;

&lt;p&gt;Here’s what that looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhknrlw4f0lu40f1famr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhknrlw4f0lu40f1famr.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are small tweaks, but they can make your requirements diagrams or dashboards &lt;em&gt;instantly clearer&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tell the Whole Story, Step by Step
&lt;/h2&gt;

&lt;p&gt;When presenting your requirements—whether in a deck, a doc, or a meeting—follow a logical narrative:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Context:&lt;/strong&gt; What business goals and challenges are we trying to solve?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key Findings:&lt;/strong&gt; What data or research tells us there’s a need?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requirements:&lt;/strong&gt; What exactly do we need to build or deliver?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; What are the expected benefits or risks tied to these requirements?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next Steps:&lt;/strong&gt; What’s the plan to validate, prioritize, and assign ownership?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty3yn2c67jayogryvpeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty3yn2c67jayogryvpeq.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This flow keeps things grounded and moves the conversation forward. It’s not just “Here’s the data,” but “Here’s what it means, and what we’re doing next.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Reading &lt;em&gt;Storytelling with Data&lt;/em&gt; reminded me that &lt;strong&gt;clarity is everything&lt;/strong&gt;—and that applies just as much to requirements as it does to analytics. Whether you’re writing user stories, building a roadmap, or presenting to the execs, the ability to &lt;strong&gt;tell a focused, visual, and human-centered story&lt;/strong&gt; makes all the difference.&lt;/p&gt;

&lt;p&gt;Next time you’re knee-deep in a project, ask yourself: &lt;em&gt;What story am I telling—and who needs to hear it?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>analyst</category>
      <category>datascience</category>
      <category>visualization</category>
      <category>data</category>
    </item>
    <item>
      <title>Building a Multi-Agent Conversational AI System</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Thu, 10 Apr 2025 09:08:41 +0000</pubDate>
      <link>https://forem.com/shadow_b/building-a-multi-agent-conversational-ai-system-with-amazon-bedrock-3elp</link>
      <guid>https://forem.com/shadow_b/building-a-multi-agent-conversational-ai-system-with-amazon-bedrock-3elp</guid>
      <description>&lt;p&gt;As AI systems become more sophisticated, we're moving beyond the "one model handles everything" approach. Today, I'll share how I built a conversational AI system that uses specialized agents for different tasks - all powered by Amazon Bedrock.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6khxwfdl6vr2sxc32cp.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6khxwfdl6vr2sxc32cp.webp" alt="Image description" width="768" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Generic AI Assistants
&lt;/h2&gt;

&lt;p&gt;Have you ever had a conversation with an AI chatbot that kept forgetting details or mixed up information from different topics? It's frustrating, right?&lt;/p&gt;

&lt;p&gt;Generic AI assistants try to handle everything—from booking cabs to tracking orders to answering random questions—in one conversational flow. This often leads to context confusion and poor user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter the Multi-Agent Architecture
&lt;/h2&gt;

&lt;p&gt;To solve this, I created a system that uses specialized AI agents that focus on specific tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A main coordinator agent that handles general queries and routes requests&lt;/li&gt;
&lt;li&gt;A cab booking specialist agent&lt;/li&gt;
&lt;li&gt;An order tracking specialist agent&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  System Architecture
&lt;/h2&gt;

&lt;p&gt;Here's how the system works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user interacts with the main agent initially&lt;/li&gt;
&lt;li&gt;Based on intent detection, the conversation gets transferred to a specialized agent&lt;/li&gt;
&lt;li&gt;The specialized agent handles its specific task until completion&lt;/li&gt;
&lt;li&gt;The user can switch between agents or return to the main menu&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Building the System with Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Let's dive into the implementation. I used Amazon Bedrock with Claude 3.5 Sonnet as the underlying LLM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging
import json
import boto3
import requests
import datetime
import random
from botocore.exceptions import ClientError

# Configure logging
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

# Create the Bedrock client
# NOTE: hardcoded keys here are placeholders for illustration; real code should
# use the default credential chain (environment variables, shared config, or an IAM role).
def get_bedrock_client():
    return boto3.client(
        service_name='bedrock-runtime',
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
        region_name="REGION"
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tool Configuration&lt;/strong&gt;&lt;br&gt;
The system uses function calling (or "tools" in Amazon Bedrock terminology) to perform actions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "book_cab",
                "description": "Book a cab between locations",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "pickup": {
                                "type": "string",
                                "description": "Pickup location (e.g., Andheri, Bandra, Dadar)"
                            },
                            "destination": {
                                "type": "string",
                                "description": "Drop-off location (e.g., Airport, Powai, Worli)"
                            },
                            "time": {
                                "type": "string",
                                "description": "Pickup time in HH:MM format"
                            },
                            "passengers": {
                                "type": "integer",
                                "description": "Number of passengers"
                            }
                        },
                        "required": ["pickup", "destination"]
                    }
                }
            }
        },
        {
            "toolSpec": {
                "name": "track_order",
                "description": "Track the status of an order",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "order_id": {
                                "type": "string",
                                "description": "The order ID to track (e.g., ORD12345)"
                            }
                        },
                        "required": ["order_id"]
                    }
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
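&lt;p&gt;Later snippets call &lt;code&gt;book_cab&lt;/code&gt; and catch a &lt;code&gt;CabNotFoundError&lt;/code&gt;, but neither is shown in the article. A minimal hypothetical stand-in (toy location list, fake booking IDs) could look like this:&lt;/p&gt;

```python
import random

class CabNotFoundError(Exception):
    """Raised when no cab is available for the requested route (hypothetical)."""

# Toy location list matching the examples in the tool schema above.
KNOWN_LOCATIONS = {"andheri", "bandra", "dadar", "airport", "powai", "worli"}

def book_cab(pickup, destination, time="15:00", passengers=1):
    """Toy stand-in for a real dispatch API: validate locations, return a booking record."""
    if pickup.lower() not in KNOWN_LOCATIONS or destination.lower() not in KNOWN_LOCATIONS:
        raise CabNotFoundError(f"No cabs from {pickup} to {destination}")
    return {
        "booking_id": f"CAB{random.randint(10000, 99999)}",
        "pickup": pickup,
        "destination": destination,
        "time": time,
        "passengers": passengers,
        "status": "confirmed",
    }
```

&lt;p&gt;In a production system this function would call a real ride-hailing API; the important part is that it returns JSON-serializable data the model can summarize back to the user.&lt;/p&gt;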



&lt;p&gt;&lt;strong&gt;Intent Detection&lt;/strong&gt;&lt;br&gt;
The system needs to identify what the user wants to do:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def detect_intent(text):
    text = text.lower()
    if any(keyword in text for keyword in ['book', 'cab', 'taxi', 'ride', 'uber', 'ola']):
        return 'cab_booking'
    elif any(keyword in text for keyword in ['order', 'track', 'package', 'delivery', 'shipment']):
        return 'order_tracking'
    else:
        return 'unknown'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
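&lt;p&gt;A few spot checks of the keyword router (the function is repeated here so the snippet runs on its own):&lt;/p&gt;

```python
def detect_intent(text):
    # Same keyword-based router as above, restated for a standalone check.
    text = text.lower()
    if any(k in text for k in ['book', 'cab', 'taxi', 'ride', 'uber', 'ola']):
        return 'cab_booking'
    elif any(k in text for k in ['order', 'track', 'package', 'delivery', 'shipment']):
        return 'order_tracking'
    return 'unknown'

print(detect_intent("Book me a taxi to the airport"))   # cab_booking
print(detect_intent("Where is my package ORD12345?"))   # order_tracking
print(detect_intent("What is the weather like?"))       # unknown
```

&lt;p&gt;Note that the cab keywords are checked first, so a message matching both lists (say, "track my uber ride") routes to cab booking; an LLM-based classifier would handle such ambiguity more gracefully.&lt;/p&gt;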



&lt;p&gt;&lt;strong&gt;The Cab Booking Agent&lt;/strong&gt;&lt;br&gt;
Let's look at how the cab booking agent is implemented:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def cab_booking_agent(bedrock_client, model_id, tool_config, conversation_history=None):
    print("Transferring to Cab Booking Agent...")
    print("-" * 50)

    # Start with a clean conversation history
    if conversation_history:
        messages = clean_conversation_history(conversation_history)
    else:
        messages = [{
            "role": "user",
            "content": [{
                "text": """You are a specialized cab booking agent. Focus only on helping users book cabs.
                Ask for the following information if not provided: pickup location, destination, pickup time, 
                and number of passengers. Be conversational but efficient."""
            }]
        }]

    # Add specialized prompt to make agent more focused
    specialized_prompt = {
        "role": "user",
        "content": [{
            "text": """You are now a specialized cab booking agent. Your responses should be direct and to the point.
            Just ask directly for any missing information needed to book a cab."""
        }]
    }
    messages.append(specialized_prompt)

    # Main conversation loop
    booking_complete = False
    first_interaction = True
    returning_to_main = False
    new_intent = None

    while not returning_to_main and not new_intent:
        try:
            # Get user input
            if first_interaction and conversation_history:
                user_input = ""
                first_interaction = False
            else:
                print("Cab Booking Agent: ", end="")
                user_input = input()

            # Check for exit commands or intent switching
            if user_input.lower() in ["exit", "quit", "bye", "cancel", "back", "return", "main menu"]:
                print("Cab Booking Agent: Returning to main menu.")
                returning_to_main = True
                break

            # Process conversation with model
            if user_input:
                messages.append({
                    "role": "user",
                    "content": [{"text": user_input}]
                })

            # Get model response
            response = bedrock_client.converse(
                modelId=model_id,
                messages=messages,
                toolConfig=tool_config
            )

            # Rest of conversation handling
            # ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Handling Tool Calls&lt;/strong&gt;&lt;br&gt;
When the model decides to book a cab, it uses the tool:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if stop_reason == 'tool_use':
    has_tool_use = True
    tool_requests = response['output']['message']['content']

    for tool_request in tool_requests:
        if 'toolUse' in tool_request and tool_request['toolUse']['name'] == 'book_cab':
            tool = tool_request['toolUse']
            print("Processing your cab booking request...")

            try:
                booking_details = book_cab(
                    tool['input']['pickup'],
                    tool['input']['destination'],
                    tool['input'].get('time', '15:00'),
                    tool['input'].get('passengers', 1)
                )

                tool_result = {
                    "toolUseId": tool['toolUseId'],
                    "content": [{"json": booking_details}]
                }

                # Mark booking as complete
                booking_complete = True

            except CabNotFoundError as err:
                tool_result = {
                    "toolUseId": tool['toolUseId'],
                    "content": [{"text": f"I couldn't find cabs from {tool['input']['pickup']} to {tool['input']['destination']}. Please check the locations and try again."}],
                    "status": 'error'
                }

            # Send tool result back to model
            tool_result_message = {
                "role": "user",
                "content": [{"toolResult": tool_result}]
            }
            messages.append(tool_result_message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Main Application Loop&lt;br&gt;
The heart of the system is the main loop that coordinates between agents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def main():
    # Setup
    model_id = "YOUR MODEL ID"
    bedrock_client = boto3.client(service_name='bedrock-runtime', region_name="ap-south-1")

    print("I can help you with cab booking and order tracking.")

    # Initialize base conversation
    base_messages = [{
        "role": "user",
        "content": [{
            "text": """You are a helpful travel and shopping assistant. You can help with cab booking and order tracking.
            Keep your responses friendly and concise."""
        }]
    }]

    try:
        current_intent = None
        base_messages_copy = None

        while True:
            if current_intent is None:
                # When in main menu
                user_input = input("You: ")

                if user_input.lower() in ["exit", "quit", "bye"]:
                    print("Assistant: Goodbye! Have a great day!")
                    break

                # Add user input to conversation
                base_messages.append({
                    "role": "user",
                    "content": [{"text": user_input}]
                })

                # Detect intent from user input
                intent = detect_intent(user_input)
                base_messages_copy = base_messages.copy()

                if intent == 'cab_booking':
                    # Transfer to cab booking agent
                    current_intent = 'cab_booking'
                    next_intent = cab_booking_agent(bedrock_client, model_id, tool_config, base_messages)

                    if next_intent:
                        current_intent = next_intent
                    else:
                        current_intent = None  # Return to main menu

                elif intent == 'order_tracking':
                    # Transfer to order tracking agent
                    current_intent = 'order_tracking'
                    next_intent = order_tracking_agent(bedrock_client, model_id, tool_config, base_messages)

                    if next_intent:
                        current_intent = next_intent
                    else:
                        current_intent = None  # Return to main menu

                else:
                    # General conversation
                    response = bedrock_client.converse(
                        modelId=model_id,
                        messages=base_messages,
                        toolConfig=tool_config
                    )

                    # Display response
                    # ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Features That Make This System Special
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Context Isolation: Each specialized agent maintains its own conversation state, preventing context confusion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Seamless Transitions: Users can move between different agents without losing context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proactive Intent Detection: The system identifies when a user wants to switch topics and transfers them to the appropriate agent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Persistent Memory: The system remembers key information across transfers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Error Handling: Robust error handling for API failures, invalid inputs, and edge cases.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
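&lt;p&gt;The main loop shown earlier calls a &lt;code&gt;detect_intent&lt;/code&gt; helper that isn't defined in the snippets above. A minimal keyword-based sketch could look like this; the keyword lists are illustrative assumptions, not the actual implementation:&lt;/p&gt;

```python
# Minimal keyword-based intent detector. The keyword lists here are
# illustrative assumptions, not the article's actual implementation.
CAB_KEYWORDS = ("cab", "taxi", "ride", "pickup")
ORDER_KEYWORDS = ("order", "package", "delivery", "track")

def detect_intent(user_input):
    """Return 'cab_booking', 'order_tracking', or None for general chat."""
    text = user_input.lower()
    if any(word in text for word in CAB_KEYWORDS):
        return "cab_booking"
    if any(word in text for word in ORDER_KEYWORDS):
        return "order_tracking"
    return None  # fall back to general conversation
```

&lt;p&gt;In practice you might replace the keyword match with a classification call to the model itself, but a cheap rule-based check keeps routing fast for obvious requests.&lt;/p&gt;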

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The multi-agent approach represents the next evolution in conversational AI. By using specialized agents, we can create more focused, helpful, and reliable conversational experiences.&lt;/p&gt;

&lt;p&gt;The code shared here is just a starting point. You could expand this system with more specialized agents, better natural language understanding, and integration with real backend systems.&lt;/p&gt;

&lt;p&gt;What specialized agents would you build for your business? Let me know in the comments!&lt;/p&gt;

</description>
      <category>multiagent</category>
      <category>ai</category>
      <category>agentaichallenge</category>
    </item>
    <item>
      <title>Understanding MCP (Model Context Protocol) with Examples</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Tue, 01 Apr 2025 05:48:52 +0000</pubDate>
      <link>https://forem.com/shadow_b/understanding-mcp-model-context-protocol-with-examples-k75</link>
      <guid>https://forem.com/shadow_b/understanding-mcp-model-context-protocol-with-examples-k75</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; is a structured way to manage and exchange contextual information between AI models and applications. It allows AI systems to maintain state, remember previous interactions, and improve response relevance in multi-turn conversations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is MCP Needed?
&lt;/h2&gt;

&lt;p&gt;Most AI models process each request independently. Without context, they cannot recall previous interactions, leading to disjointed conversations. MCP solves this by providing a framework to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain conversation history&lt;/li&gt;
&lt;li&gt;Track user preferences&lt;/li&gt;
&lt;li&gt;Improve response accuracy&lt;/li&gt;
&lt;li&gt;Call external &lt;strong&gt;tools&lt;/strong&gt; and &lt;strong&gt;APIs&lt;/strong&gt; to enhance responses&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How MCP Works
&lt;/h2&gt;

&lt;p&gt;MCP operates using &lt;strong&gt;context objects&lt;/strong&gt; that store relevant details about an interaction. These context objects can include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Session ID:&lt;/strong&gt; Unique identifier for a conversation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Information:&lt;/strong&gt; Preferences, history, and settings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Previous Queries &amp;amp; Responses:&lt;/strong&gt; Helps maintain continuity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain-Specific Knowledge:&lt;/strong&gt; Relevant facts that improve AI accuracy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API &amp;amp; Tool Calls:&lt;/strong&gt; Enables dynamic responses by fetching real-time data&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example of MCP in Action
&lt;/h3&gt;

&lt;p&gt;Let's say we are building an AI assistant for customer support. Without MCP, the conversation might look like this:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Without MCP&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; "What's my order status?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI:&lt;/strong&gt; "Please provide your order ID."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; "It's #12345."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI:&lt;/strong&gt; "Your order is in transit."&lt;/p&gt;

&lt;p&gt;Here, the AI forgets the user after every message and needs additional input each time.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;With MCP&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Using MCP, we store the session details and user data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"session_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"abc123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John Doe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"john@example.com"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"last_order_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12345"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"last_query"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"order_status"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the conversation is smoother:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; "What's my order status?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI:&lt;/strong&gt; "Your last order (#12345) is in transit."&lt;/p&gt;

&lt;p&gt;Since the AI remembers the order ID from context, it eliminates the need to ask again.&lt;/p&gt;
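&lt;p&gt;One way to make that memory concrete is to fold the stored context into the prompt before calling the model. The helper below is a hypothetical sketch (not part of any MCP specification); its keys mirror the context object shown above:&lt;/p&gt;

```python
def build_prompt(context, user_message):
    """Prepend stored context so the model can resolve references like 'my order'.
    The keys mirror the context object above; this helper is an illustrative
    sketch, not part of any MCP specification."""
    lines = []
    if "last_order_id" in context:
        lines.append(f"The user's most recent order ID is #{context['last_order_id']}.")
    if "last_query" in context:
        lines.append(f"Their previous query was about: {context['last_query']}.")
    lines.append(f"User: {user_message}")
    return "\n".join(lines)

context = {"last_order_id": "12345", "last_query": "order_status"}
prompt = build_prompt(context, "What's my order status?")
```

&lt;p&gt;The model now sees the order ID alongside the question, so it can answer without asking again.&lt;/p&gt;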

&lt;h3&gt;
  
  
  &lt;strong&gt;Using Tools and APIs in MCP&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;MCP also allows AI systems to call external &lt;strong&gt;APIs&lt;/strong&gt; and &lt;strong&gt;tools&lt;/strong&gt; dynamically. For example, if a user asks for real-time weather updates, MCP can fetch data from a weather API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"session_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xyz789"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"last_query"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"weather"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"api_call"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://weatherapi.com/current"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"location"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"New York"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI can then return:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; "What's the weather like in New York?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI:&lt;/strong&gt; "It's 72°F and sunny in New York."&lt;/p&gt;

&lt;p&gt;By integrating API calls, MCP enables AI assistants to provide real-time, accurate responses beyond static knowledge.&lt;/p&gt;
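&lt;p&gt;A runtime could dispatch the &lt;code&gt;api_call&lt;/code&gt; entry of a context object along these lines. This is a sketch under stated assumptions: the endpoint and parameter names come from the JSON above, and the fetch function is injected so the example involves no real network call:&lt;/p&gt;

```python
def dispatch_api_call(context_object, fetch):
    """Execute the context object's api_call entry, if present.
    `fetch` is any callable taking (endpoint, params) -- in production it might
    wrap requests.get; here it is injected to keep the sketch self-contained."""
    call = context_object.get("api_call")
    if call is None:
        return None  # nothing to dispatch for this turn
    return fetch(call["endpoint"], call.get("parameters", {}))

ctx = {
    "session_id": "xyz789",
    "api_call": {
        "endpoint": "https://weatherapi.com/current",
        "parameters": {"location": "New York"},
    },
}

# A stub fetcher standing in for a real HTTP client
result = dispatch_api_call(ctx, lambda url, params: f"GET {url}?location={params['location']}")
```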

&lt;h2&gt;
  
  
  Implementing MCP
&lt;/h2&gt;

&lt;p&gt;To implement MCP, you need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Session Management&lt;/strong&gt; – Store session data using a database or memory cache (e.g., Redis).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Storage&lt;/strong&gt; – Maintain a structured context object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stateful APIs&lt;/strong&gt; – Modify API calls to include and update context data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool and API Integration&lt;/strong&gt; – Enable AI to fetch external data dynamically.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Example: Context-Aware API in Python
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonify&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;context_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/chat&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;context_store&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;context_store&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]}&lt;/span&gt;

    &lt;span class="n"&gt;context_store&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generate_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context_store&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;jsonify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You said: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. Context length: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;history&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;MCP helps AI models maintain state and improve conversational flow. By structuring context information efficiently and integrating &lt;strong&gt;tools&lt;/strong&gt; and &lt;strong&gt;APIs&lt;/strong&gt;, AI assistants can provide more meaningful and personalized responses, leading to better user experiences.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What are your thoughts on MCP? Have you implemented something similar? Let me know in the comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>aiops</category>
    </item>
    <item>
      <title>How to Perform Audio Call QA Analysis Using the Sonnet Model and Deepgram API</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Fri, 21 Feb 2025 05:32:18 +0000</pubDate>
      <link>https://forem.com/shadow_b/how-to-perform-audio-call-qa-analysis-using-the-sonnet-model-and-deepgram-api-2chc</link>
      <guid>https://forem.com/shadow_b/how-to-perform-audio-call-qa-analysis-using-the-sonnet-model-and-deepgram-api-2chc</guid>
      <description>&lt;p&gt;Quality Assurance (QA) analysis for audio calls is crucial for businesses that rely on customer interactions. By analyzing customer support or sales calls, companies can improve customer experience, ensure compliance, and enhance agent performance. With AI-powered tools like Deepgram for speech-to-text conversion and Sonnet for intelligent analysis, automating QA analysis has become easier than ever.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xz10j9r0b9jqf3gmj17.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xz10j9r0b9jqf3gmj17.jpg" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Deepgram and Sonnet?
&lt;/h2&gt;

&lt;p&gt;Deepgram is an AI-powered speech-to-text platform known for its accuracy and real-time transcription capabilities. Sonnet, on the other hand, is a powerful AI model capable of analyzing text data and extracting meaningful insights. When combined, they offer a seamless way to process and analyze call recordings for QA purposes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to Perform QA Analysis on Audio Calls
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Convert Audio Calls to Text Using Deepgram&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

API_KEY = "your_deepgram_api_key"
AUDIO_FILE_PATH = "path_to_your_audio_file.wav"

with open(AUDIO_FILE_PATH, "rb") as audio:
    response = requests.post(
        "https://api.deepgram.com/v1/listen",
        headers={
            "Authorization": f"Token {API_KEY}",
            "Content-Type": "audio/wav",
        },
        data=audio,
    )
    transcript = response.json()["results"]["channels"][0]["alternatives"][0]["transcript"]
    print("Transcript:", transcript)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Analyze the Transcription Using Sonnet Model&lt;/strong&gt;&lt;br&gt;
Once we have the call transcript, we can analyze it for QA purposes using the Sonnet model. The Sonnet model can help with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sentiment analysis (detecting customer and agent emotions)&lt;/li&gt;
&lt;li&gt;Keyword spotting (identifying compliance keywords)&lt;/li&gt;
&lt;li&gt;Issue detection (highlighting complaints or repeated concerns)&lt;/li&gt;
&lt;li&gt;Agent performance evaluation (checking script adherence)&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

def analyze_text_with_sonnet(text):
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="sonnet-3.5",
        contentType="application/json",
        body={"prompt": f"Analyze the sentiment and key insights from this conversation: {text}"}
    )
    return response["output"]

transcription_text = "The customer was unhappy with the service and asked for a refund."
qa_results = analyze_text_with_sonnet(transcription_text)
print("QA Analysis:", qa_results)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Automating QA Analysis&lt;/strong&gt;&lt;br&gt;
To run this analysis on every call automatically, you can wire the two steps into a pipeline using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;n8n or Zapier to trigger the workflow whenever a new recording arrives&lt;/li&gt;
&lt;li&gt;Metabase for visualizing trends in call analytics&lt;/li&gt;
&lt;/ul&gt;
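&lt;p&gt;The two steps above can be chained into a single pipeline function. In this sketch the transcription and analysis steps are injected as callables (for example, the Deepgram and Sonnet helpers shown earlier), so the pipeline itself stays easy to test with stubs:&lt;/p&gt;

```python
def qa_pipeline(audio_path, transcribe, analyze):
    """Chain the two steps above: speech-to-text, then QA analysis.
    `transcribe` and `analyze` are injected callables (e.g. the Deepgram and
    Sonnet helpers shown earlier), keeping the pipeline itself API-agnostic."""
    transcript = transcribe(audio_path)
    report = analyze(transcript)
    return {"audio": audio_path, "transcript": transcript, "qa": report}

# Wiring it up with stubs in place of the real API helpers:
result = qa_pipeline(
    "call_001.wav",
    transcribe=lambda path: "The customer asked for a refund.",
    analyze=lambda text: {"sentiment": "negative"},
)
```

&lt;p&gt;An n8n or Zapier workflow would then call this function for each new recording and write the result to your analytics store.&lt;/p&gt;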

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using Deepgram and the Sonnet model together can significantly improve the speed and accuracy of audio call QA analysis. With automated transcription and AI-powered analysis, businesses can gain better insights into customer interactions, ensure compliance, and enhance customer service quality.&lt;/p&gt;

&lt;p&gt;By implementing this workflow, you can save time, reduce manual QA efforts, and make data-driven decisions to improve customer satisfaction.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>datascience</category>
      <category>agents</category>
    </item>
    <item>
<title>Check out this project on automated DataFrame analysis!</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Fri, 13 Dec 2024 10:04:59 +0000</pubDate>
      <link>https://forem.com/shadow_b/check-out-this-project-on-automated-df-analysis--ei0</link>
      <guid>https://forem.com/shadow_b/check-out-this-project-on-automated-df-analysis--ei0</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/shadow_b" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1085035%2Fd0190071-48ff-4b99-9d35-3cd090617ee3.jpeg" alt="shadow_b"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/shadow_b/automating-data-analysis-with-python-a-hands-on-guide-to-my-project-5f1k" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Automating Data Analysis with Python: A Hands-On Guide to My Project&lt;/h2&gt;
      &lt;h3&gt;Bhimashankar Patil ・ Dec 13 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#datascience&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#analytics&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#python&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Automating Data Analysis with Python: A Hands-On Guide to My Project</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Fri, 13 Dec 2024 09:29:30 +0000</pubDate>
      <link>https://forem.com/shadow_b/automating-data-analysis-with-python-a-hands-on-guide-to-my-project-5f1k</link>
      <guid>https://forem.com/shadow_b/automating-data-analysis-with-python-a-hands-on-guide-to-my-project-5f1k</guid>
      <description>&lt;p&gt;Data analysis is crucial across industries, but handling raw data efficiently can be a daunting challenge. With this project, I created an Automated Data Analysis pipeline that simplifies data handling and transformation, making it faster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8yox8wb14zyw171b897.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8yox8wb14zyw171b897.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Automated Data Analysis?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Manual processes are time-consuming and error-prone. To solve this, I developed a Python-based pipeline that automates these tasks while ensuring accuracy and scalability.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Add a UI to Automated Data Analysis?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;While command-line tools are powerful, they can be intimidating for non-technical users. The new interactive UI bridges the gap, enabling analysts and business users to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload Excel files directly for analysis.&lt;/li&gt;
&lt;li&gt;Generate custom plots and statistical insights without writing code.&lt;/li&gt;
&lt;li&gt;Perform outlier detection and correlation analysis interactively.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Features Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;File Upload for Analysis&lt;/strong&gt;&lt;br&gt;
The interface lets you upload Excel files with a single click. Once uploaded, the app automatically identifies numerical and categorical columns and displays summary statistics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Plot Generation&lt;/strong&gt;&lt;br&gt;
Select any column and generate visualizations instantly. This is perfect for understanding trends and distributions in your data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Outlier Detection&lt;/strong&gt;&lt;br&gt;
The app supports outlier detection using methods like Z-Score. Set a threshold value, and it highlights outliers for further investigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Correlation Heatmap&lt;/strong&gt;&lt;br&gt;
Generate a heatmap to visualize correlations between numerical features, helping identify patterns and relationships.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pair Plot Generation&lt;/strong&gt;&lt;br&gt;
The pair plot feature offers a way to explore the relationships between multiple features in a dataset through scatter plots and distributions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Behind the Scenes: How the App Works&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;File Handling and Data Parsing&lt;/strong&gt;&lt;br&gt;
The uploaded Excel file is read into a pandas DataFrame for preprocessing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic Plotting&lt;/strong&gt;&lt;br&gt;
Matplotlib and Seaborn are used to create dynamic visualizations based on user input.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Outlier Detection&lt;/strong&gt;&lt;br&gt;
The Z-Score method flags outliers beyond the specified threshold.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interactive Widgets&lt;/strong&gt;&lt;br&gt;
Streamlit widgets, such as dropdowns, sliders, and file upload buttons, allow users to interact with the app intuitively.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
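&lt;p&gt;The Z-score rule used for outlier detection above can be sketched in a few lines of plain Python. The threshold and sample data here are illustrative, not taken from the app:&lt;/p&gt;

```python
from statistics import mean, pstdev

def zscore_outliers(values, threshold=2.0):
    """Flag values whose absolute Z-score exceeds the threshold.
    Uses the population standard deviation, matching the usual Z-score rule."""
    mu = mean(values)
    sigma = pstdev(values)
    if sigma == 0:
        return []  # all values identical: no outliers
    return [x for x in values if abs((x - mu) / sigma) > threshold]

readings = [10, 12, 11, 13, 12, 11, 100]
outliers = zscore_outliers(readings, threshold=2.0)  # only 100 exceeds the threshold
```

&lt;p&gt;In the app, the same computation runs per numerical column of the uploaded DataFrame, with the threshold coming from a Streamlit slider.&lt;/p&gt;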

&lt;h3&gt;
  
  
  &lt;strong&gt;Future Enhancements&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Real-Time Data Streaming: Adding support for live data updates.&lt;/li&gt;
&lt;li&gt;Advanced Analytics: Incorporating machine learning models for predictions and clustering.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Automated Data Analysis project demonstrates the power of combining automation with interactivity. Whether you’re a business analyst or a data enthusiast, this tool simplifies exploring and analyzing datasets.&lt;/p&gt;

&lt;p&gt;UI Screenshots:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1p3tud1m6ykhman8k99u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1p3tud1m6ykhman8k99u.png" alt="Image description" width="648" height="673"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij0gnkxj4njf904i6srf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij0gnkxj4njf904i6srf.png" alt="Image description" width="611" height="887"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftarhzl6udz4ywegt5w9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftarhzl6udz4ywegt5w9r.png" alt="Image description" width="627" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyln0n3xq009ci30wbtk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyln0n3xq009ci30wbtk8.png" alt="Image description" width="709" height="825"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzlcmq1r8f2cayoalu7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzlcmq1r8f2cayoalu7q.png" alt="Image description" width="706" height="773"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn URL: &lt;a href="https://www.linkedin.com/posts/bhimashankar-patil-527795168_dataanalytics-datascience-python-activity-7273320071102439425-5e4Z/?utm_source=share&amp;amp;utm_medium=member_android" rel="noopener noreferrer"&gt;Click here to watch a video of how it works&lt;/a&gt;&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>analytics</category>
      <category>python</category>
      <category>webdev</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Thu, 28 Nov 2024 05:38:14 +0000</pubDate>
      <link>https://forem.com/shadow_b/-3ikp</link>
      <guid>https://forem.com/shadow_b/-3ikp</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/shadow_b" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1085035%2Fd0190071-48ff-4b99-9d35-3cd090617ee3.jpeg" alt="shadow_b"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/shadow_b/building-a-simple-chatbot-with-llama2-chat-with-excel-2ll0" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Building a Simple Chatbot with Llama2 [Chat with Excel]&lt;/h2&gt;
      &lt;h3&gt;Bhimashankar Patil ・ Nov 28 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#python&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#nlp&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#llm&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#datascience&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Building a Simple Chatbot with Llama2 [Chat with Excel]</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Thu, 28 Nov 2024 05:17:37 +0000</pubDate>
      <link>https://forem.com/shadow_b/building-a-simple-chatbot-with-llama2-chat-with-excel-2ll0</link>
      <guid>https://forem.com/shadow_b/building-a-simple-chatbot-with-llama2-chat-with-excel-2ll0</guid>
      <description>&lt;p&gt;In this post, I’ll explain how I built a chatbot using the Llama2 model to query Excel data intelligently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs51nnludchfw1gzjg2th.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs51nnludchfw1gzjg2th.jpg" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What We’re Building
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Loads an Excel file.&lt;/li&gt;
&lt;li&gt;Splits the data into manageable chunks.&lt;/li&gt;
&lt;li&gt;Stores the data in a vector database for fast retrieval.&lt;/li&gt;
&lt;li&gt;Uses a local Llama2 model to answer questions based on the content of the Excel file.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;Python (≥ 3.8)&lt;br&gt;
Libraries: langchain, pandas, unstructured, Chroma&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Install Dependencies
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%pip install -q unstructured langchain
%pip install -q "unstructured[all-docs]"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Step 2: Load the Excel File
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd

excel_path = "Book2.xlsx"
if excel_path:
    df = pd.read_excel(excel_path)
    data = df.to_string(index=False)
else:
    print("Upload an Excel file")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Step 3: Chunk the Data and Store in a Vector Database
&lt;/h4&gt;

&lt;p&gt;Large text data is split into smaller, overlapping chunks for effective embedding and querying. These chunks are stored in a Chroma vector database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
chunks = text_splitter.split_text(data)

embedding_model = OllamaEmbeddings(model="nomic-embed-text", show_progress=False)
vector_db = Chroma.from_texts(
    texts=chunks, 
    embedding=embedding_model,
    collection_name="local-rag"
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4: Initialize the Llama2 Model
&lt;/h4&gt;

&lt;p&gt;We use ChatOllama to load the Llama2 model locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_community.chat_models import ChatOllama

local_model = "llama2"
llm = ChatOllama(model=local_model)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 5: Create a Query Prompt
&lt;/h4&gt;

&lt;p&gt;The chatbot will respond based on specific column names from the Excel file. We create a prompt template to guide the model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.prompts import PromptTemplate

QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an AI assistant. Answer the user's questions based on the column names: 
    Id, order_id, name, sales, refund, and status. Original question: {question}"""
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 6: Set Up the Retriever
&lt;/h4&gt;

&lt;p&gt;We configure a retriever to fetch relevant chunks from the vector database, which will be used by the Llama2 model to answer questions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.retrievers.multi_query import MultiQueryRetriever

retriever = MultiQueryRetriever.from_llm(
    vector_db.as_retriever(), 
    llm,
    prompt=QUERY_PROMPT
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 7: Build the Response Chain
&lt;/h4&gt;

&lt;p&gt;The response chain integrates:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A retriever to fetch context.&lt;/li&gt;
&lt;li&gt;A prompt to format the question and context.&lt;/li&gt;
&lt;li&gt;The Llama2 model to generate answers.&lt;/li&gt;
&lt;li&gt;An output parser to format the response.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

template = """Answer the question based ONLY on the following context:
{context}
Question: {question}
"""

prompt = ChatPromptTemplate.from_template(template)

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 8: Ask a Question
&lt;/h4&gt;

&lt;p&gt;Now we’re ready to ask a question! Here’s how we invoke the chain to get a response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;raw_result = chain.invoke("How many rows are there?")
final_result = f"{raw_result}\n\nIf you have more questions, feel free to ask!"
print(final_result)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Sample Output
&lt;/h4&gt;

&lt;p&gt;When I ran the above code on a sample Excel file, here’s what I got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Based on the provided context, there are 10 rows in the table.
If you have more questions, feel free to ask!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;This approach leverages the power of embeddings and the Llama2 model to create a smart, interactive chatbot for Excel data. With some tweaks, you can extend this to work with other types of documents or integrate it into a full-fledged app!&lt;/p&gt;

&lt;h3&gt;
  
  
  Check working example with UI on my LinkedIn:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/posts/bhimashankar-patil-527795168_ai-datascience-machinelearning-activity-7234013668080766977-r2Uw/?utm_source=share&amp;amp;utm_medium=member_android" rel="noopener noreferrer"&gt; Introducing BChat Excel: A Conversational AI-Powered Tool for Excel File Interactions&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>nlp</category>
      <category>llm</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Exploring Async Deepgram API: Speech-to-Text using Python</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Mon, 21 Oct 2024 08:58:25 +0000</pubDate>
      <link>https://forem.com/shadow_b/exploring-async-deepgram-api-speech-to-text-using-python-5ckl</link>
      <guid>https://forem.com/shadow_b/exploring-async-deepgram-api-speech-to-text-using-python-5ckl</guid>
      <description>&lt;p&gt;Today we’ll explore the Deepgram API for converting voice to text (transcription). Whether you’re building a voice assistant, transcribing meetings, or creating a voice-controlled app, Deepgram makes it easier than ever to get started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1blmj5vj7aczhpo9xour.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1blmj5vj7aczhpo9xour.jpg" alt="Image description" width="704" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Deepgram?
&lt;/h2&gt;

&lt;p&gt;Deepgram is a powerful speech recognition platform that uses advanced machine learning models to transcribe audio in real-time. It offers an easy-to-use API that developers can integrate into their applications for tasks like transcribing phone calls, converting meetings into text, or even analyzing customer interactions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Deepgram?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Accuracy: Deepgram boasts high accuracy rates thanks to its deep learning algorithms trained on vast datasets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Real-Time Transcription: Get instant results as you speak, perfect for live applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multiple Languages: Supports several languages and accents, making it versatile for global applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with Deepgram API
&lt;/h2&gt;

&lt;p&gt;Install the required library: &lt;code&gt;pip install httpx&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Importing Required Libraries
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import httpx
import asyncio
import logging
import traceback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Defining the Asynchronous Function
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#recording_url: The URL of the audio file to be transcribed.
#callback_url: The URL to which Deepgram will send the #transcription results (optional).
#api_key: Your Deepgram API key.

async def transcribe_audio(recording_url: str, callback_url: str, api_key: str):
    url = "https://api.deepgram.com/v1/listen"

    # Define headers
    headers = {
        "Authorization": f"Token {api_key}"
    }

    # Define query parameters
    query_params = {
        "callback_method": "post",
        "callback": callback_url
    }

    # Define body parameters
    body_params = {
        "url": recording_url
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Sending the Asynchronous Request
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    logger.info(f"Sending request to {url} with headers: {headers}, query: {query_params}, body: {body_params}")

    async with httpx.AsyncClient(timeout=60.0) as client:
        try:
            # Make a POST request with query parameters and body
            response = await client.post(url, headers=headers, params=query_params, json=body_params)
            response.raise_for_status()  # Raise an error for HTTP error responses
            result = response.json()
            logger.info(f"Response received: {result}")

            return result

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We create an instance of httpx.AsyncClient with a timeout of 60 seconds. Using async with ensures that the client is properly closed after the block is executed.&lt;br&gt;
If the request is successful, we parse the JSON response and log it, then return the result.&lt;/p&gt;

&lt;h2&gt;
  
  
  Callback URL
&lt;/h2&gt;

&lt;p&gt;You can use &lt;a href="https://webhook.site/" rel="noopener noreferrer"&gt;webhook.site&lt;/a&gt; as a sample callback URL for testing.&lt;/p&gt;
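&lt;p&gt;Once Deepgram POSTs the result to your callback URL, you need to dig the transcript out of the JSON. A minimal sketch (the key layout follows Deepgram's documented pre-recorded response shape; double-check it against the current API docs):&lt;/p&gt;

```python
def extract_transcript(payload: dict) -> str:
    """Pull the first transcript string out of a Deepgram response payload."""
    channels = payload.get("results", {}).get("channels", [])
    if not channels:
        return ""
    alternatives = channels[0].get("alternatives", [])
    if not alternatives:
        return ""
    return alternatives[0].get("transcript", "")
```

&lt;p&gt;Defensive &lt;code&gt;get&lt;/code&gt; calls mean a malformed or empty callback body yields an empty string instead of a KeyError.&lt;/p&gt;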

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This structured approach highlights how to utilize asynchronous programming in Python to interact with the Deepgram API efficiently. By breaking the code into blocks and explaining each part, readers can better understand the implementation and how to adapt it to their own needs.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>python</category>
      <category>deepgram</category>
    </item>
    <item>
      <title>How to Run Python 3 Using Power Automate Desktop (Step-Wise Guide)</title>
      <dc:creator>Bhimashankar Patil</dc:creator>
      <pubDate>Mon, 15 Jan 2024 11:47:47 +0000</pubDate>
      <link>https://forem.com/shadow_b/how-to-run-python3-using-power-automate-desktopstep-wise-guide-4cdb</link>
      <guid>https://forem.com/shadow_b/how-to-run-python3-using-power-automate-desktopstep-wise-guide-4cdb</guid>
      <description>&lt;p&gt;Hi Folks! This blog is about how we can run a Python 3 script using Power Automate Desktop.&lt;br&gt;
As of now, Power Automate Desktop natively supports only Python 2.7, which has limitations.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;br&gt;
1 - You need to have PAD installed. (obviously)&lt;br&gt;
2 - Python 3&lt;/p&gt;

&lt;p&gt;Step 1 - Create A flow with any Name that you like.&lt;/p&gt;

&lt;p&gt;Step 2 - Go to Actions and search for "Open CMD Session".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhka4kxhncd1lerrk8bq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhka4kxhncd1lerrk8bq.png" width="481" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3 - Inside that action, set the current working directory as a parameter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5994irrv95so5vdxbv99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5994irrv95so5vdxbv99.png" alt="Image description" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4 - Again, search for "Write to CMD Session" in Actions and drag it under "Open CMD Session".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbr3cih6lkixztv69d33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbr3cih6lkixztv69d33.png" alt="Image description" width="496" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In "Write to CMD Session" give the command to run your python3 script.&lt;br&gt;
example: cd.\PyProjects&amp;amp;&amp;amp; cd.my_script.py&lt;/p&gt;
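&lt;p&gt;If you just want something to test the flow with, a minimal hypothetical my_script.py could be:&lt;/p&gt;

```python
# my_script.py - a minimal script to confirm PAD launched the right interpreter
import platform

def greeting() -> str:
    # Reports the interpreter version, so the CMD output shows which Python ran
    return f"Hello from Python {platform.python_version()}"

if __name__ == "__main__":
    print(greeting())
```

&lt;p&gt;The version string printed in the CMD session confirms the script ran under Python 3 rather than the bundled Python 2.7.&lt;/p&gt;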

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8diro5rom6as0e84606.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8diro5rom6as0e84606.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5 - Now go to Actions again and search for "Read from CMD Session". It lets you see the output displayed in the CMD window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnaz3hghp63mc58zcgco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnaz3hghp63mc58zcgco.png" alt="Image description" width="483" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcabngp1vf0ggfad0gth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcabngp1vf0ggfad0gth.png" alt="Image description" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the parameter, just pass the instance of "Open CMD Session".&lt;/p&gt;

&lt;p&gt;Now Save and Test your flow.&lt;/p&gt;

&lt;p&gt;If you face any problems, feel free to comment with your questions.&lt;/p&gt;

</description>
      <category>powerautomate</category>
      <category>automation</category>
      <category>python3</category>
    </item>
  </channel>
</rss>
