Anurag Kanojiya
How to use LangGraph within a FastAPI Backend 🚀

In this tutorial, I’ll walk you through how to build a backend to generate AI-crafted emails using LangGraph for structured workflows and FastAPI for API endpoints.


📌 Prerequisites

1️⃣ Install Dependencies

Create a requirements.txt file and add these dependencies:

fastapi
uvicorn
pydantic
python-dotenv
google-generativeai
langgraph
langchain

Now install them by running this command in a terminal inside your project folder:

pip install -r requirements.txt

📌 Setting Up the Backend

2️⃣ Create backend.py and Import Required Modules

from fastapi import FastAPI
from pydantic import BaseModel
import os
import google.generativeai as genai
from langgraph.graph import StateGraph
from dotenv import load_dotenv

What Do These Modules Do?

  • FastAPI → API framework
  • Pydantic → Request validation
  • os → Reads environment variables
  • google-generativeai → Gemini AI for email generation
  • langgraph → Structures the AI workflow as a graph
  • dotenv → Loads API keys from a .env file into the environment

3️⃣ Load Environment Variables & Initialize FastAPI

load_dotenv()
app = FastAPI()

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
if not GEMINI_API_KEY:
    raise RuntimeError("GEMINI_API_KEY is not set; add it to your .env file")
genai.configure(api_key=GEMINI_API_KEY)

What do these do?

  • .env helps keep API keys secure instead of hardcoding them.
  • FastAPI() initializes the backend application.
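For load_dotenv() to pick up the key, create a .env file in the project root. A minimal example (the value is a placeholder, not a real key):

```
GEMINI_API_KEY=your_api_key_here
```

Make sure .env is listed in .gitignore so the key never lands in your repository.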

📌 Defining the API

4️⃣ Request Model (Pydantic Validation)

class EmailRequest(BaseModel):
    tone: str      # e.g., "formal", "friendly"
    ai_model: str  # which AI to use, e.g., "gemini"
    language: str  # e.g., "English", "French"
    context: str   # what the email should say

This ensures the API gets structured input:

  • tone → Email tone (e.g., formal, friendly).
  • ai_model → Which AI to use (gemini or mistral).
  • language → Language of the email (e.g., English, French).
  • context → The purpose/content of the email.
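A request body matching this model looks like the following; the field values are only illustrative:

```python
import json

# Illustrative payload matching the EmailRequest model's fields.
payload = {
    "tone": "formal",
    "ai_model": "gemini",
    "language": "English",
    "context": "Request a meeting with the product team next Tuesday.",
}

# This is the JSON a client would POST to the endpoint.
body = json.dumps(payload)
print(body)
```

FastAPI validates the incoming JSON against EmailRequest automatically and returns a 422 error if a field is missing or has the wrong type.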

5️⃣ AI Email Generation Function (Gemini)

def generate_email_gemini(language: str, tone: str, context: str):
    model = genai.GenerativeModel("gemini-2.0-flash")

    prompt = f"""
    Generate an email in {language} with a {tone} tone. Context: {context}
    Return the response in this format:
    Subject: //subject
    Body: //body
    """

    response = model.generate_content(prompt)
    response_text = response.text.strip()

    # Split the reply into subject and body; fall back gracefully
    # if the model did not follow the requested format.
    if "Body:" in response_text:
        subject, body = response_text.split("Body:", 1)
    else:
        subject, body = "No Subject", response_text
    subject = subject.replace("Subject:", "").strip()
    body = body.strip()

    return {"subject": subject, "body": body}

How It Works

  1. Calls Gemini AI using "gemini-2.0-flash".
  2. Constructs a prompt for email generation.
  3. Parses the AI response and extracts subject & body.
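The parsing step can be tried in isolation. Below, a hypothetical model reply in the requested format is split with the same logic the function uses:

```python
# Hypothetical Gemini reply following the requested format.
response_text = """Subject: Meeting Request
Body: Dear team,
Could we meet on Tuesday?
Best regards"""

# Same fallback logic as in generate_email_gemini.
if "Body:" in response_text:
    subject, body = response_text.split("Body:", 1)
else:
    subject, body = "No Subject", response_text

subject = subject.replace("Subject:", "").strip()
body = body.strip()

print(subject)  # Meeting Request
```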

6️⃣ Structuring the Workflow with LangGraph

def generate_email_graph(ai_model: str, tone: str, language: str, context: str):
    def email_generation_fn(state):
        if ai_model == "gemini":
            email = generate_email_gemini(language, tone, context)
        else:
            email = "Invalid AI model selected!"
        return {"email": email}

    graph = StateGraph(dict)
    graph.add_node("generate_email", email_generation_fn)
    graph.set_entry_point("generate_email")
    # Mark the node as the final step; recent LangGraph versions
    # require every node to lead to a finish point.
    graph.set_finish_point("generate_email")

    return graph.compile()

Why Use LangGraph?

  • Creates structured AI workflows.
  • Makes it easy to expand functionalities (e.g., adding post-processing).
  • Can be extended with multiple AI models (Mistral, GPT, etc.).
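At its core, the node body is a dispatch on ai_model, so supporting another model later just means registering another generator function. A minimal LangGraph-free sketch of that pattern (the generator here is a stand-in, not a real API call):

```python
# Stand-in generator; in the real backend this would call the Gemini API.
def generate_email_gemini_stub(language, tone, context):
    return {"subject": "stub", "body": f"{tone} email in {language}: {context}"}

# Registry of supported models; add "mistral", "gpt", etc. here later.
GENERATORS = {
    "gemini": generate_email_gemini_stub,
}

def email_generation_fn(state, ai_model="gemini", language="English",
                        tone="formal", context="say hello"):
    generator = GENERATORS.get(ai_model)
    if generator is None:
        return {"email": "Invalid AI model selected!"}
    return {"email": generator(language, tone, context)}

result = email_generation_fn({})
print(result["email"]["subject"])  # stub
```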

📌 FastAPI Routes

7️⃣ Root Endpoint

@app.get("/")
def read_root():
    return {"message": "Hello, AutoComposeBackend is live!"}

This simply confirms the server is running.


8️⃣ AI Email Generation API Endpoint

@app.post("/generate_email")
async def generate_email(request: EmailRequest):
    """Generate an AI-crafted email using Gemini."""
    graph = generate_email_graph(request.ai_model, request.tone, request.language, request.context)
    response = graph.invoke({})
    return response

How It Works

  1. Receives input via POST request (EmailRequest).
  2. Calls generate_email_graph to create a workflow.
  3. Executes the AI model and returns the email response.
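Since graph.invoke({}) returns the graph's final state, a successful response from this endpoint has the following general shape (the values are illustrative):

```
{
  "email": {
    "subject": "Meeting Request",
    "body": "Dear team, ..."
  }
}
```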

📌 Running the Server

Save backend.py and run it from a terminal:

uvicorn backend:app --host 0.0.0.0 --port 8080                   

Now visit http://localhost:8080/docs to test the API using FastAPI’s built-in Swagger UI (or Postman).


🚀 Deploying FastAPI on Railway.app

Now that our FastAPI backend is ready, let's deploy it on Railway.app, a cloud platform for hosting backend applications effortlessly.


1️⃣ Create a Railway Account & New Project

  1. Go to Railway.app and sign up.
  2. Click "New Project" → "Deploy from GitHub".
  3. Connect your GitHub repository containing the FastAPI backend.

2️⃣ Add a Procfile for Deployment

Railway uses a Procfile to define how your app runs. Create a Procfile in your project root:

web: uvicorn backend:app --host 0.0.0.0 --port $PORT

Why?

  • uvicorn backend:app → Starts the FastAPI server.
  • --host 0.0.0.0 → Allows Railway to bind it to a public address.
  • --port $PORT → Uses the Railway-assigned port dynamically.

3️⃣ Add Environment Variables in Railway

Since we use API keys, they must be stored securely:

  1. In your Railway Project Dashboard, go to Settings → Variables.
  2. Add environment variables:
    • GEMINI_API_KEY = your_api_key_here

4️⃣ Deploy the App on Railway

  1. Click on Deploy.
  2. Wait for the deployment to complete. Once done, you'll get a public URL (e.g., https://your-app.up.railway.app).

Note: If you do not get a domain automatically, generate a public domain manually from the project's Networking section.


5️⃣ Test Your Deployed API

Open:

https://your-app.up.railway.app/docs

Here, you can test the API endpoints using FastAPI's built-in Swagger UI or Postman.


🎯 Done! Your FastAPI Backend is Live! 🚀

You now have a FastAPI backend running on Railway.app with LangGraph-powered AI email generation. 💡


The next tutorial will cover integrating this FastAPI + LangGraph backend with a Jetpack Compose frontend! 🚀
