<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Chimobi Roland</title>
    <description>The latest articles on Forem by Chimobi Roland (@orc-1).</description>
    <link>https://forem.com/orc-1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3109983%2F8e4d040b-f08b-45d6-86ce-340f39318a51.png</url>
      <title>Forem: Chimobi Roland</title>
      <link>https://forem.com/orc-1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/orc-1"/>
    <language>en</language>
    <item>
      <title>I Fixed My Slow Nginx + Gunicorn Setup — Here’s How It Became 3X Faster</title>
      <dc:creator>Chimobi Roland</dc:creator>
      <pubDate>Sat, 07 Jun 2025 16:30:14 +0000</pubDate>
      <link>https://forem.com/orc-1/i-fixed-my-slow-nginx-gunicorn-setup-heres-how-it-became-3x-faster-1j9f</link>
      <guid>https://forem.com/orc-1/i-fixed-my-slow-nginx-gunicorn-setup-heres-how-it-became-3x-faster-1j9f</guid>
      <description>&lt;p&gt;Before we dive in, a quick disclaimer — I’m not a professional DevOps engineer, just a developer who needed to fix a slow server. After spending hours tweaking, I want to share my experience so you don’t waste hours on something that should take five minutes.&lt;/p&gt;

&lt;p&gt;Now, here’s my setup: I’m running a Django app on a CentOS 9 EC2 server. For my web server, I chose Nginx for three key reasons: performance, load balancing, and ease of configuration, along with my familiarity with it. Behind Nginx, I used Gunicorn as the WSGI application server to run the Django app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu59iblar1z4smrp0xp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu59iblar1z4smrp0xp0.png" alt="Image description" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem:&lt;/p&gt;

&lt;p&gt;While testing, I noticed something odd — responses were about 3x slower when Nginx proxied to Gunicorn than when it proxied directly to the Django development server.&lt;/p&gt;

&lt;p&gt;My first thought was that the application might be too heavy, so I enabled preload in Gunicorn. This preloads the app before forking worker processes, which can improve memory efficiency but doesn’t necessarily boost request speed. I also increased the number of workers from 2 to 4 to allow more concurrent requests and set the log level to error to reduce logging overhead.&lt;/p&gt;
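&lt;p&gt;For reference, those tweaks can live in a &lt;code&gt;gunicorn.conf.py&lt;/code&gt; file (Gunicorn config files are plain Python). This is a sketch with illustrative values, not my exact config:&lt;/p&gt;

```python
# gunicorn.conf.py -- a sketch of the tweaks described above; values are illustrative
import multiprocessing

preload_app = True     # load the app once before forking workers (saves memory)
workers = 4            # raised from 2 to allow more concurrent requests
loglevel = "error"     # reduce logging overhead
bind = "0.0.0.0:8000"

# a common rule of thumb for worker count:
# workers = multiprocessing.cpu_count() * 2 + 1
```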

&lt;p&gt;Despite these tweaks, the requests were still 3x slower, suggesting the bottleneck might be elsewhere — possibly due to Gunicorn’s default synchronous workers, Nginx’s configuration, or database/I/O delays.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1pup32lz6jeazebwywb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1pup32lz6jeazebwywb.png" alt="Image description" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is my Nginx and Gunicorn configuration. By default, the Nginx config is located in:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/nginx/conf.d/filename.conf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name **.**.***.***;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/ec2-user/src/g*****/media;
    }

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff46mnx5c423psxuwfe6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff46mnx5c423psxuwfe6p.png" alt="Image description" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created a systemd service file to run Gunicorn as a detached background service, ensuring it starts automatically when the server restarts and can handle incoming requests reliably.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=ec2-user
Group=ec2-user
WorkingDirectory=/home/ec2-user/src/g*****
ExecStart=/home/ec2-user/src/g*****/vone/bin/gunicorn \
 --workers 4 \
 --timeout 60 \
 --bind 0.0.0.0:8000 \
 --access-logfile /home/ec2-user/src/g*****/access.log \
 --error-logfile /home/ec2-user/src/g*****/error.log \
 v1_0.wsgi:application
[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After hours of debugging and tracing, I ruled out Nginx as the direct cause of the issue. The real culprit turned out to be a misconfigured binding address in Gunicorn.&lt;/p&gt;

&lt;p&gt;I had set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--bind 0.0.0.0:8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in my Gunicorn config and a similar address in Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;proxy_pass http://localhost:8000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This single misconfiguration drastically slowed down my service. After a deep dive, I learned that this setup sent every proxied request through the full TCP network stack (with &lt;code&gt;localhost&lt;/code&gt; name resolution on top) instead of using a far cheaper Unix domain socket :-/&lt;/p&gt;
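&lt;p&gt;You can get a feel for the difference with a small Python experiment — a minimal sketch, not my production test — that echoes one byte back and forth over TCP loopback and over a Unix domain socket and times both:&lt;/p&gt;

```python
# Compare round-trip time: TCP over loopback vs a Unix domain socket.
import os
import socket
import tempfile
import threading
import time

def start_echo(server_sock):
    """Accept one connection in the background and echo bytes back."""
    def serve():
        conn, _ = server_sock.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)
    threading.Thread(target=serve, daemon=True).start()

def time_roundtrips(client, n=500):
    """Send one byte and wait for the echo, n times; return elapsed seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        client.sendall(b"x")
        client.recv(64)
    return time.perf_counter() - t0

# TCP over the loopback interface
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)
start_echo(tcp_srv)
tcp_client = socket.create_connection(tcp_srv.getsockname())
tcp_secs = time_roundtrips(tcp_client)

# Unix domain socket
sock_path = os.path.join(tempfile.mkdtemp(), "echo.sock")
uds_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds_srv.bind(sock_path)
uds_srv.listen(1)
start_echo(uds_srv)
uds_client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds_client.connect(sock_path)
uds_secs = time_roundtrips(uds_client)

print(f"TCP loopback: {tcp_secs:.4f}s   Unix socket: {uds_secs:.4f}s")
```

Exact numbers vary by machine (and this micro-benchmark is not a full HTTP request), but it illustrates the point: with a Unix socket, Nginx-to-Gunicorn traffic skips TCP entirely.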

&lt;p&gt;&lt;strong&gt;The Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To fix the issue, I followed three simple steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a Gunicorn Socket Service&lt;/strong&gt;&lt;br&gt;
First, I created a Gunicorn socket file to allow Gunicorn to communicate via a Unix socket:&lt;br&gt;
&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/systemd/system/gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Update Gunicorn to Use the Socket&lt;/strong&gt;&lt;br&gt;
Next, I modified the Gunicorn service file to bind to the new socket instead of a network address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;— bind unix:/run/gunicorn.sock

[Unit]
Description=gunicorn daemon
After=network.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Service]
User=ec2-user
Group=ec2-user
WorkingDirectory=/home/ec2-user/src/g*****
ExecStart=/home/ec2-user/src/g*****/vone/bin/gunicorn \
 --workers 3 \
 --timeout 60 \
 --bind unix:/run/gunicorn.sock \
 --access-logfile /home/ec2-user/src/g*****/access.log \
 --error-logfile /home/ec2-user/src/g*****/error.log \
 v1_0.wsgi:application
[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Update Nginx to Use the Socket&lt;/strong&gt;&lt;br&gt;
Lastly, I updated Nginx’s &lt;code&gt;proxy_pass&lt;/code&gt; setting to communicate via the Unix socket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name **.**.***.***;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/ec2-user/src/g*****/media;
    }

    location / {
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After restarting the services using the commands below, the unnecessary network overhead was gone and the application became much faster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sudo Systemctl restart Nginx
Sudo Systemctl restart Gunicorn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I ran the test again on Postman, and voila — the response time for Gunicorn was ~3X faster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqs0236lajxyskarqrux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqs0236lajxyskarqrux.png" alt="Image description" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>nginx</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to build MCP with ChatGPT and Slack - A productivity bot that saves time by summarizing messages and sentiment.</title>
      <dc:creator>Chimobi Roland</dc:creator>
      <pubDate>Wed, 30 Apr 2025 14:13:50 +0000</pubDate>
      <link>https://forem.com/orc-1/how-to-build-mcp-with-chatgpt-and-slack-a-productivity-bot-that-saves-time-by-summarizing-5740</link>
      <guid>https://forem.com/orc-1/how-to-build-mcp-with-chatgpt-and-slack-a-productivity-bot-that-saves-time-by-summarizing-5740</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This step-by-step guide, with complete code, will show you how to build a Slack bot, connect it to ChatGPT using an MCP server, and use it to summarize hundreds of messages in any Slack channel it’s added to. The bot notes the dominant emotional tone and provides a count of total messages sent. Alternatively, you can clone the &lt;a href="https://github.com/ORC-1/mcp-chatgpt-slack-bot" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, &lt;a href="https://github.com/modelcontextprotocol/servers/tree/main/src/slack" rel="noopener noreferrer"&gt;create the Slack bot&lt;/a&gt;, install the dependencies, and run the project instantly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wghh19ih5mlm7lfh5xx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wghh19ih5mlm7lfh5xx.gif" alt="Image description" width="1024" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, let’s get the basics. What is MCP, you may ask? Like most people, you may have been hearing about MCP everywhere — maybe you’ve even seen a mind-blowing video of a 3D tool like Blender being controlled with Claude.&lt;/p&gt;

&lt;p&gt;Think of MCP as the next upgrade in LLM capabilities. Initially, we had LLMs that could only answer questions based on what they were trained on. Remember ChatGPT’s famous refrain:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I don’t have access to real-time data, and my knowledge is up to date only until September 2021.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then, as LLMs evolved, tools let them search the internet for more information and stay current with the latest events. Naturally, the problems with tools — such as having to write custom code for each integration — gave birth to MCP, a critical third phase in LLM evolution.&lt;/p&gt;

&lt;p&gt;MCP stands for Model Context Protocol, an open protocol created by Anthropic to enable easy integrations between tools, external data sources, and even across LLMs. Think of it as a portable travel power adapter that seamlessly connects you to any power outlet, no matter what country you’re in.&lt;/p&gt;

&lt;p&gt;MCP works on a modular client-server architecture, where clients (say, Claude or Gemini) establish individual connections with one or more servers that provide access to tools, data, or prompts. This setup allows developers to focus primarily on the client side, using standardized integrations to connect with various services like Slack, WhatsApp, and more, often with minimal manual configuration. This is a significant advantage over traditional tooling methods, which typically require developers to write custom code for each integration.&lt;/p&gt;

&lt;p&gt;What better way to understand MCP than to actually build one? Since I enjoy building things from scratch to get a full feel of the system, I’ll be writing both the server and the client.&lt;br&gt;
In a real production environment, you typically don’t need to write the server side yourself — the solution provider handles that, just like a regular vendor SDK. A lot of the code I’m using comes straight from the official MCP documentation, but I’ve made some modifications to work with ChatGPT, which is still my favorite LLM: over time, it has learned how I like my questions answered, delivering just the right amount of detail with relatable examples based on our previous conversations. Alright, without wasting more time, let’s build a Slack bot using MCP that can:&lt;/p&gt;

&lt;p&gt;Give us a daily summary of all the messages sent to the channel, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The main topics, organized by time grouping&lt;/li&gt;
&lt;li&gt;The overall dominant emotional tone&lt;/li&gt;
&lt;li&gt;The total number of messages sent&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Client&lt;/strong&gt;&lt;br&gt;
To get started, let’s begin by building the client.&lt;/p&gt;

&lt;p&gt;Step 1: Create a requirements.txt file and paste the following into it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;annotated-types==0.7.0
anthropic==0.49.0
anyio==4.9.0
certifi==2025.1.31
click==8.1.8
distro==1.9.0
h11==0.14.0
httpcore==1.0.8
httpx==0.28.1
httpx-sse==0.4.0
idna==3.10
jiter==0.9.0
mcp==1.6.0
openai==1.75.0
pydantic==2.11.3
pydantic-settings==2.9.1
pydantic_core==2.33.1
python-dotenv==1.1.0
sniffio==1.3.1
sse-starlette==2.2.1
starlette==0.46.2
tqdm==4.67.1
typing-inspection==0.4.0
typing_extensions==4.13.2
uvicorn==0.34.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Create a virtual environment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;virtualenv env --python=python3.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Activate the virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source env/bin/activate or .env\Scripts\activate #windows 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 4: Install the packages using the requirements.txt file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 5: Create a .env file and add the API key for the AI provider you’re using. For this example, we’re using ChatGPT, so you’ll need to include your &lt;a href="https://platform.openai.com/playground" rel="noopener noreferrer"&gt;OpenAI API key&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ANTHROPIC_API_KEY= 
OPENAI_API_KEY=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 6: Copy the code below and paste it into a Python file named client.py.&lt;/p&gt;

&lt;p&gt;You’ll need to set the following values in the code before running it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;MCP server path (absolute path to the server file — we’ll create this shortly)&lt;br&gt;
channel name (the Slack channel the bot has been added to)&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import asyncio
import sys
import os
import json
from typing import Optional
from openai import AsyncOpenAI
from openai.types.chat import ChatCompletionMessageParam
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from contextlib import AsyncExitStack
from dotenv import load_dotenv

load_dotenv() #Make sure you have the right key in you .env file

class MCPClient:
    def __init__(self):
        # Initializes the MCPClient instance
        # - Sets up an AsyncExitStack to manage async context clean-up
        # - Prepares OpenAI Async client (assumes OPENAI_API_KEY is set via .env)
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.openai_client = AsyncOpenAI()

    async def connect_to_server(self, server_script_path: str):
        """Connect to an MCP-compatible server launched via stdio.

        Supports either a Python or JavaScript server script. Initializes the client session and
        prints a list of available tools provided by the server.
        """
        is_python = server_script_path.endswith('.py')
        is_js = server_script_path.endswith('.js')
        if not (is_python or is_js):
            raise ValueError("Server script must be a .py or .js file")

        command = "python" if is_python else "node"
        server_params = StdioServerParameters(command=command, args=[server_script_path], env=None)

        # Launch the server and wrap communication in a session
        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

        await self.session.initialize()
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server with tools:", [tool.name for tool in tools])

    async def process_query(self, query: str) -&amp;gt; str:
        """Send a user query to GPT-4-turbo and handle any tool calls in the response.

        - Sends the initial query.
        - Dynamically lists available tools from the server.
        - Handles a loop of tool call executions if GPT uses tools.
        - Returns the final response content (including tool outputs).
        """
        print("Processing a query using ChatGPT and available tools")
        messages: list[ChatCompletionMessageParam] = [
            {"role": "user", "content": query}
        ]

        # Prepare the list of tool definitions for GPT's tool-use
        response = await self.session.list_tools()
        available_tools = [
            {
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.inputSchema
                }
            }
            for tool in response.tools
        ]

        final_text = []

        # Loop until GPT returns a final message (not a tool call)
        while True:
            response = await self.openai_client.chat.completions.create(
                model="gpt-4-turbo",
                messages=messages,
                tools=available_tools,
                tool_choice="auto",
                max_tokens=1000
            )

            reply = response.choices[0].message
            messages.append(reply)

            if reply.tool_calls:
                # If GPT wants to use tools, call them via session and update the message history
                for tool_call in reply.tool_calls:
                    tool_name = tool_call.function.name
                    print("Using tool:" + tool_name)
                    tool_args = json.loads(tool_call.function.arguments)

                    result = await self.session.call_tool(tool_name, tool_args)
                    final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        # MCP tool results are a list of content parts;
                        # OpenAI expects a plain string here
                        "content": "\n".join(
                            c.text for c in result.content if hasattr(c, "text")
                        )
                    })
            else:
                # Final message from GPT with no tool calls
                final_text.append(reply.content or "")
                break

        return "\n".join(final_text)

    async def chat_loop(self):
        """Runs a single interactive chat loop that summarizes Slack activity.

        Note: The channel name is currently hardcoded and must be set for this to function.
        """
        print("\nMCP Client Started!")
        print("Summarizing messages for the selected channel...")
        channel_name = ""
        if channel_name == "":
            print("Channel name to summarize needed. Kindly add.")
            sys.exit(1)

        try:
            # Predefined query prompt format
            query = """Summarize today's Slack activity in %s with the following details:

                    1. **Total Message Count:** Provide the total number of messages sent across all channels and direct messages today.

                    2. **Dominant Tone:** Identify the most prevalent tone expressed in the messages. Options could include (but are not limited to): positive, negative, neutral, inquisitive, urgent, humorous, or collaborative. Briefly explain why you identified this as the dominant tone, perhaps by mentioning recurring sentiment or types of language used.

                    3. **Topic Summary by Time Grouping:** Summarize the main topics discussed throughout the day. Group these summaries chronologically. For each time block where a distinct topic or set of related topics emerged, provide a concise summary of the discussion. For example, if there was a discussion about "project alpha" around 9:00 AM and then a separate discussion about "marketing campaign updates" around 11:00 AM, these should be summarized separately under their approximate timeframes. Be sure to capture the essence of each conversation without going into excessive detail.
                    """ % channel_name

            response = await self.process_query(query)
            print("\n\033[94m" + response + "\033[0m")  # Print in blue
            return 0
        except Exception as e:
            print(f"\nError: {str(e)}")
            return 1

    async def cleanup(self):
        """Gracefully closes all resources in the async context stack."""
        await self.exit_stack.aclose()


async def run_once():
    """Single run of the client process.

    - Checks for server path.
    - Connects to the MCP server.
    - Runs the chat loop.
    - Cleans up afterwards.
    """
    client = MCPClient()
    try:
        mcp_server_path = ""
        if mcp_server_path == "":
            print("MCP server path needed. Kindly add.")
            sys.exit(1)
        await client.connect_to_server(mcp_server_path)
        await client.chat_loop()
        return 0
    finally:
        await client.cleanup()

async def main(n_minutes):
    """Main runner function that loops execution at an interval defined by `n_minutes`."""
    print("Running immediately...")
    await run_once()

    while True:
        print(f"\nWaiting {n_minutes} minute(s) before next run...\n")
        await asyncio.sleep(n_minutes * 60)
        print("Running again after delay...")
        await run_once()

if __name__ == "__main__":
    # Entry point: expects a single argument for delay in minutes
    if len(sys.argv) &amp;lt; 2:
        print("Usage: python client.py &amp;lt;delay_in_minutes&amp;gt;")
        sys.exit(1)

    try:
        delay_minutes = float(sys.argv[1])
    except ValueError:
        print("Delay must be a number (integer or float).")
        sys.exit(1)

    asyncio.run(main(delay_minutes))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Slack Bot&lt;/strong&gt;&lt;br&gt;
To complete the server side, you first need a SLACK_BOT_TOKEN. To get one, you can visit this &lt;a href="https://github.com/modelcontextprotocol/servers/tree/main/src/slack" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server&lt;/strong&gt;&lt;br&gt;
Here’s the MCP server code for Slack.&lt;/p&gt;

&lt;p&gt;This code provides implementations for listing Slack channels, sending messages, and retrieving messages. Each function is explained in the comments to help you understand what’s going on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"""Main MCP server application."""
import os
from dotenv import load_dotenv
from mcp.server.fastmcp import FastMCP

from tools import (
    list_slack_channels,
    send_slack_message,
    get_channel_messages
)
from tools.config import logger

# Load environment variables
load_dotenv()

# Get Slack token from environment
SLACK_BOT_TOKEN = os.environ.get('SLACK_BOT_TOKEN')
if not SLACK_BOT_TOKEN:
    raise ValueError("SLACK_BOT_TOKEN environment variable is required")

# Initialize FastMCP server
mcp = FastMCP("mcp_demo")


@mcp.tool()
async def slack_list_channels(limit: int = 100) -&amp;gt; str:
    """List all channels in the Slack workspace.

    Args:
        limit: Maximum number of channels to return (default 100, max 1000)
    """
    return await list_slack_channels(SLACK_BOT_TOKEN, limit)


@mcp.tool()
async def slack_send_message(channel_id: str, text: str) -&amp;gt; str:
    """Send a message to a Slack channel.

    Args:
        channel_id: The ID of the channel to send the message to
        text: The message text to send
    """
    return await send_slack_message(SLACK_BOT_TOKEN, channel_id, text)


@mcp.tool()
async def slack_get_messages(channel_id: str, limit: int = 50) -&amp;gt; str:
    """Get recent messages from a Slack channel.

    Args:
        channel_id: The ID of the channel to get messages from
        limit: Maximum number of messages to return (default 50, max 1000)
    """
    return await get_channel_messages(SLACK_BOT_TOKEN, channel_id, limit)



if __name__ == "__main__":
    # Initialize and run the server
    logger.info("Starting FastMCP server...")
    mcp.run(transport='stdio')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
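&lt;p&gt;The &lt;code&gt;tools&lt;/code&gt; module imported above isn’t shown here — see the repo for the real implementations. As an illustration only, here’s a hedged sketch of how &lt;code&gt;get_channel_messages&lt;/code&gt; might look against Slack’s &lt;code&gt;conversations.history&lt;/code&gt; endpoint, using httpx (already in requirements.txt); the actual code in the repo may differ:&lt;/p&gt;

```python
# Hypothetical sketch of tools/get_channel_messages; the real repo may differ.
SLACK_API = "https://slack.com/api"

def format_messages(data: dict) -> str:
    """Turn a conversations.history payload into plain text for the LLM."""
    if not data.get("ok"):
        return f"Slack API error: {data.get('error', 'unknown_error')}"
    lines = []
    for msg in data.get("messages", []):
        user = msg.get("user", "unknown")
        lines.append(f"[{msg.get('ts', '?')}] {user}: {msg.get('text', '')}")
    return "\n".join(lines)

async def get_channel_messages(token: str, channel_id: str, limit: int = 50) -> str:
    """Fetch up to `limit` recent messages from a channel."""
    import httpx  # project dependency (requirements.txt); imported lazily so
                  # the pure formatting helper above stays dependency-free
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"{SLACK_API}/conversations.history",
            headers={"Authorization": f"Bearer {token}"},
            params={"channel": channel_id, "limit": min(limit, 1000)},
        )
    return format_messages(resp.json())
```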



&lt;p&gt;&lt;strong&gt;End Result&lt;/strong&gt;&lt;br&gt;
Once you’ve completed the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up client.py✅&lt;/li&gt;
&lt;li&gt;Created the Slack bot and added to the channel you’d like summarized✅&lt;/li&gt;
&lt;li&gt;Built the MCP server✅&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To run the code, simply type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python client.py 600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, 600 represents the number of minutes to wait before making another call to summarize all new messages. You can adjust this value to suit your needs. As a demo, I’m currently printing to the terminal; you could instead send the summary to your WhatsApp, email, or an &lt;a href="https://admin.ghabie.com/chat" rel="noopener noreferrer"&gt;AI Support tool&lt;/a&gt;.&lt;/p&gt;
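&lt;p&gt;For example, the terminal print could be swapped for a simple append-to-file delivery — a minimal sketch, with a hypothetical &lt;code&gt;summary_log.txt&lt;/code&gt; path:&lt;/p&gt;

```python
# Minimal sketch: persist each summary instead of printing it.
from datetime import datetime, timezone
from pathlib import Path

def persist_summary(summary: str, log_path: str = "summary_log.txt") -> str:
    """Append a UTC-timestamped summary entry to a local log file."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    entry = f"[{stamp}]\n{summary}\n\n"
    with Path(log_path).open("a", encoding="utf-8") as fh:
        fh.write(entry)
    return entry
```

In <code>chat_loop</code>, you would call this with the response text in place of the <code>print</code>.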

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0j4jh4ht0bq8ece0g5c.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0j4jh4ht0bq8ece0g5c.gif" alt="Image description" width="760" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>mcp</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
