<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kintur Shah</title>
    <description>The latest articles on Forem by Kintur Shah (@kintur_kt).</description>
    <link>https://forem.com/kintur_kt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1191035%2F40edd1d7-52a9-4a11-ba6b-74928201d576.jpeg</url>
      <title>Forem: Kintur Shah</title>
      <link>https://forem.com/kintur_kt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kintur_kt"/>
    <language>en</language>
    <item>
      <title>The Complete Guide: Deploying a Dockerized MCP Server to AWS ECS Fargate and connecting it to AWS AgentCore Gateway</title>
      <dc:creator>Kintur Shah</dc:creator>
      <pubDate>Mon, 26 Jan 2026 04:02:18 +0000</pubDate>
      <link>https://forem.com/kintur_kt/the-complete-guide-deploying-a-dockerized-mcp-server-to-aws-ecs-fargate-and-connecting-it-to-aws-fo7</link>
      <guid>https://forem.com/kintur_kt/the-complete-guide-deploying-a-dockerized-mcp-server-to-aws-ecs-fargate-and-connecting-it-to-aws-fo7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Model Context Protocol (MCP) is rapidly becoming the standard for connecting AI agents to external tools and data sources. But building the server is only half the battle. The real challenge is deploying it securely to the cloud and connecting it to a frontend AI platform like AgentCore without triggering a cascade of browser security errors.&lt;/p&gt;

&lt;p&gt;Recently, I went through the process of containerizing a Python-based MCP server that talks to AWS Bedrock Knowledge Bases and deploying it to AWS ECS Fargate. It was a journey filled with "504 Gateway Timeouts," "Mixed Content Blocking," and mysterious IAM failures.&lt;/p&gt;

&lt;p&gt;This post is the guide I wish I had. It details the exact architecture and the critical, minute configuration steps required to make the connection stable and secure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We aren't just running a container; we are building a secure networking chain. The specific challenge here is connecting the AgentCore Gateway - which requires a secure HTTPS endpoint - to our backend Docker container running on AWS ECS Fargate, which natively listens on insecure HTTP.&lt;br&gt;
Here is the winning flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iacfttnlmcfabehlhrb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iacfttnlmcfabehlhrb.jpg" alt=" " width="262" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 1: The MCP Server Code &amp;amp; Docker&lt;/p&gt;

&lt;p&gt;For the server implementation, I used the FastMCP Python library, which simplifies the creation of Model Context Protocol servers.&lt;br&gt;
Crucial Configuration (Transport Mode): When initializing the MCP server, it is critical to use the streamable-http transport mode. This mode is essential for avoiding "Mixed Content" security blocking when connecting a secure browser client (like AgentCore) to a backend container. Unlike other transport modes (like SSE), which can trigger browser security blocks if the handshake redirects to an insecure internal link, streamable-http provides a compatible, stateless endpoint that works seamlessly behind a proxy.&lt;/p&gt;

&lt;p&gt;Here is the concise code snippet for the simpleadd tool and the main execution block. You can append this to the bottom of your main.py file.&lt;br&gt;
This includes the critical transport='streamable-http' setting required for your CloudFront/ALB architecture.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assumes the top of main.py already has:
#   import os
#   import boto3
#   from fastmcp import FastMCP
#   mcp = FastMCP("knowledge-mcp-server")

# Initialize boto3 client
client = boto3.client(
    'bedrock-agent-runtime',
    region_name='us-east-2'
)

@mcp.tool()
def simpleadd(a: int, b: int) -&amp;gt; int:
    """A simple tool to add two numbers. Useful for testing connectivity."""
    return a + b

def main():
    host = os.environ.get("HOST", "0.0.0.0")
    port = int(os.environ.get("PORT", 8080))

    print(f"Starting FastMCP server on http://{host}:{port}")
    print("Transport: streamable-http")

    mcp.run(
        transport="streamable-http",
        host=host,
        port=port,
    )

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Step 2: Containerization &amp;amp; Pushing to ECR&lt;/p&gt;

&lt;p&gt;Before touching the infrastructure, we need to package our code into a Docker container and upload it to AWS Elastic Container Registry (ECR).&lt;br&gt;
The Dockerfile: Create a file named Dockerfile in your project root. We keep it simple, but we must ensure we expose the port that matches our Python code (8080).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080
ENV HOST=0.0.0.0
ENV PORT=8080

CMD ["python", "main.py"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
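&lt;p&gt;The Dockerfile installs from &lt;code&gt;requirements.txt&lt;/code&gt;. A minimal sketch of that file, assuming only the two libraries the server code actually imports (pin versions as needed):&lt;/p&gt;

```text
fastmcp
boto3
```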

&lt;ol&gt;
&lt;li&gt;Pushing to AWS ECR: Run these commands in your terminal, replacing us-east-2 with your desired region and 123456789012 with your AWS Account ID.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A. Create the Repository:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;aws ecr create-repository --repository-name knowledge-mcp-server --region us-east-2&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;B. Authenticate Docker:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-2.amazonaws.com&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;C. Build the Image (The Crucial "Minute Detail"): If you are building this on a Mac with Apple Silicon (M1/M2/M3), Docker will default to arm64 architecture. However, AWS Fargate usually defaults to linux/amd64. If these don't match, your task will crash instantly with an obscure "Exec Format Error."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;docker build --platform linux/amd64 -t knowledge-mcp-server .&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;D. Tag and Push the image&lt;/p&gt;

&lt;p&gt;Tag the image:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;docker tag knowledge-mcp-server:latest 123456789012.dkr.ecr.us-east-2.amazonaws.com/knowledge-mcp-server:latest&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Push to AWS:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;docker push 123456789012.dkr.ecr.us-east-2.amazonaws.com/knowledge-mcp-server:latest&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now your code is safely stored in ECR, ready for deployment.&lt;/p&gt;

&lt;p&gt;Step 3: The AWS Foundation (ALB &amp;amp; Networking)&lt;/p&gt;

&lt;p&gt;With the container in ECR, we need to build the networking path. We typically default to HTTPS everywhere, but because we are using CloudFront as our "SSL Wrapper" later (in Step 5), we intentionally keep the internal AWS networking simple to avoid protocol mismatches.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Application Load Balancer (ALB): Create an ALB in your public subnets.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Listener: Configure the listener to use HTTP on Port 80.&lt;/p&gt;

&lt;p&gt;Why not HTTPS? We will let CloudFront handle the SSL termination. Configuring the ALB for HTTP avoids the complexity of managing internal certificates between AWS services and prevents the "504 Gateway Timeout" errors caused by CloudFront trying to speak HTTPS to a container that only speaks HTTP.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Target Group (Crucial Detail): Create a Target Group that points to your Fargate tasks.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Target Type: IP Addresses (required for Fargate).&lt;/p&gt;

&lt;p&gt;Protocol: HTTP on Port 8080 (matching your container).&lt;/p&gt;

&lt;p&gt;Health Check Path: By default, AWS checks the root path /. However, because we switched our fastmcp server to streamable-http, the server listens specifically on /mcp.&lt;/p&gt;

&lt;p&gt;Action: You must change the Health Check path to /mcp.&lt;/p&gt;

&lt;p&gt;The Consequence: If you leave it as /, the health check will fail (404 Not Found), and ECS will continually kill and restart your task, leaving you with a "Zombie" service that never stabilizes.&lt;/p&gt;
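&lt;p&gt;If you prefer scripting this fix, it maps to a single &lt;code&gt;elbv2&lt;/code&gt; ModifyTargetGroup call. A hedged sketch; the Target Group ARN is a placeholder, and the actual call is left commented:&lt;/p&gt;

```python
def health_check_params(target_group_arn):
    """Parameters for elbv2 ModifyTargetGroup on a FastMCP backend."""
    return {
        "TargetGroupArn": target_group_arn,
        "HealthCheckPath": "/mcp",       # streamable-http serves here, not /
        "HealthCheckPort": "traffic-port",
    }

# A real call would be:
# import boto3
# elbv2 = boto3.client("elbv2", region_name="us-east-2")
# elbv2.modify_target_group(**health_check_params("arn:aws:elasticloadbalancing:..."))
```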

&lt;ol&gt;
&lt;li&gt;Security Groups (The Chain of Trust): We need to configure two security groups to ensure traffic flows correctly without exposing the container to the open internet.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;ALB Security Group: Allow Inbound traffic on Port 80 from Anywhere (0.0.0.0/0). This allows CloudFront to reach the balancer.&lt;/p&gt;

&lt;p&gt;ECS Task Security Group: Allow Inbound traffic on Port 8080, but for the Source, select Custom and paste the Security Group ID of your ALB.&lt;/p&gt;

&lt;p&gt;Why? This ensures no one can bypass the load balancer to hit your container directly.&lt;/p&gt;
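&lt;p&gt;The two rules above can be sketched as &lt;code&gt;boto3&lt;/code&gt; ingress payloads. The security group IDs are placeholders, and the calls themselves are left commented:&lt;/p&gt;

```python
def alb_ingress(alb_sg_id):
    """ALB SG: allow HTTP from anywhere so CloudFront can reach it."""
    return {
        "GroupId": alb_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    }

def task_ingress(task_sg_id, alb_sg_id):
    """Task SG: allow 8080 only from the ALB's security group."""
    return {
        "GroupId": task_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": alb_sg_id}],
        }],
    }

# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-2")
# ec2.authorize_security_group_ingress(**alb_ingress("sg-alb123"))
# ec2.authorize_security_group_ingress(**task_ingress("sg-task456", "sg-alb123"))
```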

&lt;p&gt;Step 4: ECS Fargate &amp;amp; The "Two Roles" Trap&lt;br&gt;
Now we deploy the container. This step contains the most common pitfall in AWS ECS: confusing the Execution Role with the Task Role.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the Cluster
Go to ECS -&amp;gt; Create Cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Choose Fargate (Serverless).&lt;/p&gt;

&lt;p&gt;Name it (e.g., agent-cluster).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the Task Definition (The Blueprint): Create a new Task Definition with the following settings:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Launch Type: Fargate.&lt;/p&gt;

&lt;p&gt;OS/Architecture: Linux / X86_64.&lt;/p&gt;

&lt;p&gt;CPU/Memory: .25 vCPU / .5 GB (MCP servers are lightweight).&lt;/p&gt;

&lt;p&gt;Container Details: Image URI: Paste the ECR URI from Step 2.&lt;/p&gt;

&lt;p&gt;Port Mappings: 8080 (TCP).&lt;/p&gt;
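&lt;p&gt;Put together, the settings above map onto a &lt;code&gt;register_task_definition&lt;/code&gt; payload. A sketch with placeholder names and ARNs, showing where the two roles discussed below plug in:&lt;/p&gt;

```python
def task_definition(image_uri, execution_role_arn, task_role_arn):
    """Build an ECS register_task_definition payload for the MCP server."""
    return {
        "family": "knowledge-mcp-server",
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": "256",     # .25 vCPU
        "memory": "512",  # .5 GB
        "runtimePlatform": {"operatingSystemFamily": "LINUX",
                            "cpuArchitecture": "X86_64"},
        "executionRoleArn": execution_role_arn,  # Role A: pull image, ship logs
        "taskRoleArn": task_role_arn,            # Role B: what your code can call
        "containerDefinitions": [{
            "name": "mcp",
            "image": image_uri,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        }],
    }

# import boto3
# ecs = boto3.client("ecs", region_name="us-east-2")
# ecs.register_task_definition(**task_definition(
#     "123456789012.dkr.ecr.us-east-2.amazonaws.com/knowledge-mcp-server:latest",
#     "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
#     "arn:aws:iam::123456789012:role/McpTaskRole",
# ))
```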

&lt;ol&gt;
&lt;li&gt;IAM Roles (CRITICAL): ECS asks for two different roles. If you swap them, your code will crash with "Access Denied" or "Credentials Not Found."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Role A: Task EXECUTION Role (For AWS)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Purpose: Lets ECS pull your Docker image from ECR and push logs to CloudWatch.&lt;/p&gt;

&lt;p&gt;Permissions: AmazonECSTaskExecutionRolePolicy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role B: Task Role (For Your Code)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Purpose: This is the identity assumed by your running container. If your Python code calls boto3.client('bedrock'), it uses this role.&lt;/p&gt;

&lt;p&gt;Action: You must create a custom IAM Role (e.g., McpTaskRole) and attach the necessary permissions (e.g., bedrock:Retrieve, s3:GetObject). &lt;br&gt;
Crucially, ensure the "Trust Relationship" policy allows ecs-tasks.amazonaws.com, not just EC2.&lt;/p&gt;

&lt;p&gt;The Trap: If you leave "Task Role" as "None" or use the Execution Role, your fastmcp server will start, but every time it tries to search the Knowledge Base, it will crash.&lt;/p&gt;
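&lt;p&gt;For reference, the trust relationship mentioned above is the standard ECS task trust policy:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```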

&lt;ol&gt;
&lt;li&gt;Create the Service
Go to your Cluster -&amp;gt; Create Service.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Launch Type: Fargate.&lt;/p&gt;

&lt;p&gt;Task Definition: Select the one you just made.&lt;/p&gt;

&lt;p&gt;Service Name: mcp-service.&lt;/p&gt;

&lt;p&gt;Network Configuration (Crucial):&lt;/p&gt;

&lt;p&gt;VPC: Select the same VPC where you built your ALB.&lt;/p&gt;

&lt;p&gt;Subnets: Select your subnets.&lt;/p&gt;

&lt;p&gt;Security Group: Select the "ECS Task Security Group" created in Step 3. (Do not let AWS create a new default group, or it will block the ALB).&lt;/p&gt;

&lt;p&gt;Auto-assign Public IP: ENABLED (Required for Fargate to pull Docker images from ECR unless you have a NAT Gateway).&lt;/p&gt;

&lt;p&gt;Load Balancing: Select "Application Load Balancer".&lt;/p&gt;

&lt;p&gt;Container to Load Balance: Select your container:8080.&lt;/p&gt;

&lt;p&gt;Target Group: Select the "Existing Target Group" created in Step 3.&lt;/p&gt;

&lt;p&gt;Step 5: The Magic Layer: AWS CloudFront&lt;/p&gt;

&lt;p&gt;This is the most critical step. If you try to connect AgentCore directly to your ALB via HTTP, the browser will block it (a secure page cannot call an insecure API). If you try to set up SSL directly on the container, it becomes an operational nightmare.&lt;/p&gt;

&lt;p&gt;CloudFront acts as our smart, secure proxy that handles encryption, protocol translation, and browser security rules so your Python code doesn't have to.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the Distribution&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Go to CloudFront -&amp;gt; Create Distribution.&lt;br&gt;
Origin Domain: Select your ALB from the dropdown list.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Protocol Policy (Fixing the 504 Error)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Look for the Origin Protocol Policy setting.&lt;br&gt;
Action: Select HTTP Only.&lt;/p&gt;

&lt;p&gt;Why? Your ALB is listening on HTTP (Port 80). If you select "Match Viewer" or "HTTPS Only," CloudFront will try to speak HTTPS to the ALB. Since we didn't install certificates on the ALB, the connection will fail, resulting in the dreaded 504 Gateway Timeout. By selecting "HTTP Only," CloudFront handles the secure HTTPS connection with the user but speaks plain HTTP to your backend.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CORS Headers (Fixing Browser Blocks): Browsers block cross-origin requests by default. We need to tell the browser it's safe to talk to our API.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Go to Default Cache Behavior.&lt;br&gt;
Viewer Protocol Policy: Redirect HTTP to HTTPS.&lt;br&gt;
Allowed HTTP Methods: Select GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. (You need POST for JSON-RPC).&lt;/p&gt;

&lt;p&gt;Response Headers Policy: Search for and select CORS-with-preflight.&lt;/p&gt;

&lt;p&gt;Why? This managed policy automatically adds the Access-Control-Allow-Origin: * header to every response. Without this, AgentCore will see a "Network Error" even if your server is working perfectly.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Final Connection (AgentCore Gateway): Once the distribution is deployed (it takes a few minutes), copy your Distribution Domain Name (e.g., d12345abcdef.cloudfront.net). Now, configure your AgentCore environment:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create a Gateway:&lt;/strong&gt; In your AgentCore Gateway dashboard, create a new Gateway instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add a Target:&lt;/strong&gt; Inside that Gateway, create a new Target.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The URL:&lt;/strong&gt; Paste your CloudFront URL and append the endpoint path you defined in your Python code: &lt;strong&gt;&lt;a href="https://d12345abcdef.cloudfront.net/mcp" rel="noopener noreferrer"&gt;https://d12345abcdef.cloudfront.net/mcp&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Status: Because CloudFront is handling SSL and CORS, the connection status should turn "Ready" immediately. You now have a secure, serverless AI tool backend running on AWS!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verification (Testing with &lt;strong&gt;Postman&lt;/strong&gt;): Before you even use the AI agent, you can verify the entire pipeline is working using Postman. This confirms that CloudFront, the ALB, and your Fargate task are all talking to each other.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Method: POST&lt;/p&gt;

&lt;p&gt;URL: &lt;a href="https://d12345abcdef.cloudfront.net/mcp" rel="noopener noreferrer"&gt;https://d12345abcdef.cloudfront.net/mcp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Body (Raw JSON):&lt;br&gt;
&lt;code&gt;{&lt;br&gt;
  "jsonrpc": "2.0",&lt;br&gt;
  "id": "test-1",&lt;br&gt;
  "method": "tools/call",&lt;br&gt;
  "params": {&lt;br&gt;
    "name": "simpleadd",&lt;br&gt;
    "arguments": { "a": 10, "b": 20 }&lt;br&gt;
  }&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Expected Result: You should receive a 200 OK response with the result 30. If you see this, your secure, serverless AI backend is live and ready for production!&lt;/p&gt;
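&lt;p&gt;The same check can be scripted instead of using Postman. A sketch with the &lt;code&gt;requests&lt;/code&gt; library; the helper function is a local convenience (not part of any SDK), the domain is the placeholder from above, and the Accept header is an assumption (streamable-http servers commonly require both content types):&lt;/p&gt;

```python
def tools_call_payload(tool, arguments, request_id="test-1"):
    """Build a JSON-RPC 2.0 tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

payload = tools_call_payload("simpleadd", {"a": 10, "b": 20})

# A real call would be:
# import requests
# resp = requests.post(
#     "https://d12345abcdef.cloudfront.net/mcp",
#     json=payload,
#     headers={"Accept": "application/json, text/event-stream"},
# )
# print(resp.status_code, resp.text)
```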

&lt;p&gt;#AWS #Docker #Python #DevOps #MCPDeployment #MCP #ModelContextProtocol #Fargate #ECS #AI #Agents #LLM&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI for Exploratory Data Analysis (EDA)</title>
      <dc:creator>Kintur Shah</dc:creator>
      <pubDate>Sun, 23 Nov 2025 21:39:36 +0000</pubDate>
      <link>https://forem.com/kintur_kt/ai-for-exploratory-data-analysis-eda-6no</link>
      <guid>https://forem.com/kintur_kt/ai-for-exploratory-data-analysis-eda-6no</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futgqbq6x928xhxkueb8o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futgqbq6x928xhxkueb8o.jpg" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What is EDA?&lt;br&gt;
Exploratory Data Analysis (EDA) is a crucial step in the data science process, allowing analysts to understand data distributions, detect anomalies, and uncover hidden patterns before applying machine learning models. Traditionally, EDA requires domain expertise and manual effort in writing scripts, visualizing data, and identifying trends.&lt;/p&gt;

&lt;p&gt;With the rise of Artificial Intelligence (AI) and Machine Learning (ML), new tools are automating and accelerating the EDA process, making it more efficient and accessible. AI-powered EDA tools leverage Natural Language Processing (NLP), AutoML (Automated Machine Learning), and deep learning to automate data cleaning, generate insights, and create visualizations with minimal coding.&lt;/p&gt;

&lt;p&gt;In this blog, I will explore how AI is transforming EDA and highlight tools like PandasAI and AutoML that automate data insights.&lt;br&gt;
How do tools like PandasAI and AutoML automate data insights?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;PandasAI: Enhancing EDA with Generative AI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlt2m42kwbje7hy10lmq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlt2m42kwbje7hy10lmq.jpg" alt=" " width="416" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PandasAI is an innovative Python library that integrates generative artificial intelligence capabilities into the widely-used Pandas library. This integration allows users to perform data analysis through natural language prompts, making EDA more accessible, especially for those without extensive programming backgrounds.&lt;/p&gt;

&lt;p&gt;Key Features of PandasAI:&lt;/p&gt;

&lt;p&gt;Conversational Data Analysis: Users can input natural language queries to analyze data, such as "Show the top 5 countries by GDP," and PandasAI interprets and executes the corresponding Pandas operations.&lt;/p&gt;

&lt;p&gt;Automated Data Cleaning: PandasAI can identify missing values, detect duplicates, and suggest corrections for cleaner datasets.&lt;/p&gt;

&lt;p&gt;Automated Visualization: The library can generate visualizations based on user prompts, facilitating a deeper understanding of data patterns and trends.&lt;br&gt;
Seamless Integration: Built on top of Pandas, it requires minimal changes to existing workflows, allowing for easy adoption.&lt;/p&gt;

&lt;p&gt;Integration with OpenAI (ChatGPT): PandasAI can leverage GPT-based models to understand and process data queries intelligently.&lt;/p&gt;
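&lt;p&gt;To make "automated data cleaning" concrete, here is what PandasAI is automating, written out in plain pandas on a toy frame:&lt;/p&gt;

```python
import pandas as pd

# Toy frame with one fully duplicated row and one row of missing values.
df = pd.DataFrame({
    "country": ["US", "US", "IN", None],
    "gdp": [25.0, 25.0, 3.7, None],
})

missing_per_column = df.isna().sum()         # missing values per column
duplicate_rows = int(df.duplicated().sum())  # fully duplicated rows
cleaned = df.drop_duplicates().dropna()      # one simple cleaning pass

print(missing_per_column.to_dict())
print(duplicate_rows)
print(len(cleaned))
```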

&lt;p&gt;Example Usage of PandasAI:&lt;br&gt;
&lt;code&gt;from pandasai import PandasAI&lt;br&gt;
from pandasai.llm.openai import OpenAI&lt;br&gt;
import pandas as pd&lt;br&gt;
df = pd.read_csv("sales_data.csv")&lt;br&gt;
llm = OpenAI(api_token="your_openai_api_key")&lt;br&gt;
pandas_ai = PandasAI(llm)&lt;br&gt;
response = pandas_ai.run(df, prompt="What is the total revenue for 2023?")&lt;br&gt;
print(response)&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AutoML: Automating the EDA Process&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqbdjz4ezvmsv2v80kw3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqbdjz4ezvmsv2v80kw3.jpg" alt=" " width="259" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Automated Machine Learning (AutoML) refers to the process of automating the end-to-end tasks of applying machine learning to real-world problems. In the context of EDA, AutoML frameworks can automatically perform data cleaning, feature engineering, model selection, and hyperparameter tuning, thereby accelerating the data analysis pipeline.&lt;/p&gt;

&lt;p&gt;Key Features of AutoML in EDA:&lt;/p&gt;

&lt;p&gt;Automated Data Preprocessing: AutoML tools can handle missing values, detect outliers, and scale data appropriately without manual intervention.&lt;/p&gt;

&lt;p&gt;Feature Engineering and Selection: These tools can create and select the most relevant features, enhancing model performance and interpretability.&lt;/p&gt;

&lt;p&gt;Model Training and Evaluation: AutoML frameworks can train multiple models, compare their performance, and select the best-performing one based on predefined metrics.&lt;br&gt;
Visualization &amp;amp; Summary Reports: Generates detailed insights, including correlation matrices, histograms, and statistical summaries.&lt;/p&gt;
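&lt;p&gt;The summary artifacts mentioned above (statistical summaries and correlation matrices) are one-liners in plain pandas; this is what these frameworks generate and bundle at scale:&lt;/p&gt;

```python
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0], "y": [2.0, 4.0, 6.0, 8.0]})

summary = df.describe()  # count, mean, std, quartiles per column
corr = df.corr()         # Pearson correlation matrix

print(summary.loc["mean", "x"])  # mean of x is 2.5
print(corr.loc["x", "y"])        # close to 1 for a perfectly linear pair
```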

&lt;p&gt;Popular AutoML Tools for EDA:&lt;br&gt;
H2O AutoML - An open-source AutoML framework for quick data exploration.&lt;br&gt;
Google AutoML Tables - AI-driven insights for tabular data&lt;br&gt;
AutoViz - AI-powered exploratory data visualization&lt;br&gt;
MLJAR Supervised AutoML - Automated EDA reports, missing value handling, and model selection.&lt;/p&gt;

&lt;p&gt;Example usage of MLJAR AutoML for EDA (note that &lt;code&gt;fit&lt;/code&gt; expects features and a target; "target" below is a hypothetical column name):&lt;br&gt;
&lt;code&gt;import pandas as pd&lt;br&gt;
from supervised.automl import AutoML&lt;br&gt;
df = pd.read_csv("data.csv")&lt;br&gt;
X = df.drop(columns=["target"])&lt;br&gt;
y = df["target"]&lt;br&gt;
automl = AutoML(mode="Explain")&lt;br&gt;
automl.fit(X, y)&lt;br&gt;
automl.report()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Benefits of AI-Driven EDA:&lt;br&gt;
✅ Faster Analysis - AI automates repetitive tasks, reducing manual effort.&lt;br&gt;
✅ Improved Accuracy - AI can detect patterns and anomalies more effectively.&lt;br&gt;
✅ Better Insights - AI-generated reports and visualizations enhance decision-making.&lt;br&gt;
✅ No-Code &amp;amp; Low-Code Solutions - AI tools make EDA accessible to non-programmers.&lt;/p&gt;

&lt;p&gt;Conclusion:&lt;/p&gt;

&lt;p&gt;AI is revolutionizing Exploratory Data Analysis (EDA) by automating data preprocessing, visualizations, and insights generation. Tools like PandasAI and AutoML make it easier to interact with datasets using natural language and automate complex analysis tasks.&lt;br&gt;
With these advancements, AI-powered EDA is becoming faster, more accurate, and more accessible, allowing data analysts, business users, and developers to extract meaningful insights with minimal effort.&lt;/p&gt;

&lt;p&gt;Want to try AI-powered EDA? Start with PandasAI or MLJAR AutoML today and transform your data analysis workflow!&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pandasai-docs.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;https://pandasai-docs.readthedocs.io/en/latest/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://medium.com/data-science-in-your-pocket/understanding-the-mljar-automl-framework-490391c04585" rel="noopener noreferrer"&gt;https://medium.com/data-science-in-your-pocket/understanding-the-mljar-automl-framework-490391c04585&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.kaggle.com/code/saurav9786/10-eda-automatic-tools" rel="noopener noreferrer"&gt;https://www.kaggle.com/code/saurav9786/10-eda-automatic-tools&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>datascience</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
