Utkarsh Rastogi for AWS Community Builders

🔓 Day 4: Unlocking LangChain’s Power — Tools, Tool Calling & Messages

Welcome to Day 4 of my LangChain series! Today, we’re diving into three essential building blocks that make LangChain powerful: Tools, Tool Calling, and Messages.

🔧 Tools let your AI interact with the real world—like using a calculator, fetching data, or triggering workflows.

🔗 Tool Calling lets the model decide when to call those functions and generate correctly structured arguments for them.

💬 Messages are how you and the AI communicate, structured by roles like user, assistant, and tool, and enriched with content like text or images.

By the end of this guide, you'll understand how LangChain enables dynamic AI actions and smart conversations using these concepts.


🛠️ Overview

LangChain’s tool abstraction connects a Python function with extra details—like its name, what it does, and what inputs it expects. This makes it possible for chat models to call these tools directly during a conversation.

Imagine giving your AI assistant a calculator or a weather app it can use anytime. That’s what Tools do!


🔑 Key Concepts

  • Tools wrap Python functions with a schema that the model can understand.
  • They can be passed into chat models that support tool calling.
  • Use the @tool decorator to easily turn any function into a tool.
  • LangChain can:
    • Automatically figure out the function’s name, description, and inputs.
    • Allow custom definitions.
    • Support return types like images or tables.
    • Let you hide certain arguments from the model (via injection).
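
Here's a minimal sketch of the @tool decorator in action (it assumes the langchain-core package is installed; the multiply function is just an illustrative example):

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# LangChain infers the tool's name ("multiply"), its description
# (the docstring), and its argument schema (from the type hints).
```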

⚙️ Tool Interface

LangChain defines tools using the BaseTool class. Key components include:

  • name: The tool’s identifier.
  • description: What the tool does.
  • args: The input parameters the tool expects, described as a JSON schema.
  • invoke(): Runs the tool normally.
  • ainvoke(): Runs the tool asynchronously.
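
Continuing the multiply example above, here's what that interface looks like in practice (a sketch; the exact schema output can vary by version):

```python
import asyncio

print(multiply.name)         # "multiply"
print(multiply.description)  # "Multiply two integers."
print(multiply.args)         # JSON-schema-style dict describing a and b

print(multiply.invoke({"a": 6, "b": 7}))                 # 42 (synchronous)
print(asyncio.run(multiply.ainvoke({"a": 6, "b": 7})))   # 42 (asynchronous)
```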

🤔 Why It Matters

Tools allow AI models to act beyond text, enabling smarter conversations and real-world tasks. They’re the bridge between static responses and dynamic actions.


📦 What is a Message in LangChain?

In LangChain, a message is the basic unit of communication between you and a chat model. It includes:

  • Role: Who sent the message (e.g., user, assistant)
  • Content: What was said (e.g., text, image, audio)
  • Metadata: Extra info like message ID, token usage, etc.

This structured format ensures smooth and consistent communication with any chat model provider.
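
As a quick illustration, here's how those three pieces show up on a message object (a minimal sketch using langchain-core; metadata fields are typically filled in by the model provider):

```python
from langchain_core.messages import AIMessage

msg = AIMessage(content="Paris is the capital of France.")

print(msg.type)                        # "ai" -> the role (assistant)
print(msg.content)                     # what was said
print(msg.id, msg.response_metadata)   # metadata such as an ID or provider info
```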


🎭 Role in LangChain Messages

Each message has a role that defines its purpose in the conversation:

  • system – Sets the behavior or rules for the model. Not all models support it.
  • user – Input from the user: prompts, questions, or commands.
  • assistant – Responses from the model: answers or requests to invoke a tool.
  • tool – Sends a tool's result back to the model. Used together with tool calling.

📝 Content in LangChain Messages

The content of a message is what’s being communicated — usually text, but sometimes multimodal data like images, audio, or video (depending on the model’s support).

🔹 Most common content type:

  • Text — supported by almost all chat models.

🔸 Emerging support:

  • Multimodal content — such as images or audio. Still limited across providers.

📌 Message Types Based on Content:

  • SystemMessage – Guides the model’s behavior.
  • HumanMessage – Represents user input.
  • AIMessage – Represents the model’s output.
  • Multimodality – Used when the content includes images, audio, or video.
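
A short sketch of these message types in use (it assumes a chat model integration such as langchain-openai with an API key configured; any provider works the same way):

```python
from langchain_core.messages import SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI  # placeholder; use any chat model integration

llm = ChatOpenAI(model="gpt-4o-mini")    # hypothetical model choice

messages = [
    SystemMessage(content="You are a concise assistant."),       # guides behavior
    HumanMessage(content="What is LangChain in one sentence?"),  # user input
]

ai_msg = llm.invoke(messages)  # returns an AIMessage (the model's output)
print(ai_msg.content)
```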

💬 Conversation Structure (Made Simple)

When you talk to a chat model, your messages should follow a clear order so the model can respond the right way.

Here’s a simple example:

  • User: "Hi there!"
  • Assistant: "Hello! How can I help you today?"
  • User: "Tell me something funny."
  • Assistant: "Okay! Why don’t eggs tell jokes? Because they might crack up!"

This kind of back-and-forth helps the AI understand the conversation and reply in a helpful way.
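
In code, that ordered back-and-forth is just a list of messages passed to the model (a sketch reusing the llm from the previous example):

```python
from langchain_core.messages import HumanMessage, AIMessage

history = [
    HumanMessage(content="Hi there!"),
    AIMessage(content="Hello! How can I help you today?"),
    HumanMessage(content="Tell me something funny."),
]

# The model sees the whole conversation so far and replies to the last message.
reply = llm.invoke(history)
print(reply.content)
```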


🛠️ What is Tool Calling?

Many AI applications interact with humans using natural language. But sometimes, we want the model to interact directly with systems like APIs or databases—which require structured input (e.g., a JSON payload).

Tool Calling allows AI models to call predefined functions (tools) with the correct input format. This helps the model perform actions instead of just generating text.

Tool calling is useful when:

  • You need structured outputs (like calling an API).
  • The model must trigger a real task (like sending an email, searching a database, etc.).

✅ Prerequisites:

  • Tools with defined schemas
  • A chat model that supports tool calling

🔑 Key Concepts of Tool Calling

🧰 Tool Creation

Use the @tool decorator to turn a Python function into a tool. This creates a mapping between the function and a clear input/output schema.

🔗 Tool Binding

Bind the tool to a chat model that supports tool calling. This tells the model what tools it can use and what input format each tool expects.
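
A minimal binding sketch, reusing the multiply tool from earlier (the model name is a placeholder; any chat model that supports tool calling can be used):

```python
from langchain_openai import ChatOpenAI  # placeholder integration

llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools([multiply])  # the model now knows multiply's schema
```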

🧠 Tool Calling

The model decides when to use a tool during a conversation. It formats the input correctly according to the tool’s schema.
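
For example, given the bound model above, a math question typically comes back as a structured tool call rather than plain text (a sketch; the actual output depends on the model):

```python
from langchain_core.messages import HumanMessage

question = HumanMessage(content="What is 6 times 7?")
ai_msg = llm_with_tools.invoke([question])

# The model's "answer" is a request to call the tool with schema-valid arguments.
for call in ai_msg.tool_calls:
    print(call["name"], call["args"])   # e.g. multiply {"a": 6, "b": 7}
```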

⚡ Tool Execution

Once the model chooses a tool and provides the arguments, the function (tool) is executed using those inputs.
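
Closing the loop, you run the tool with the model's arguments and hand the result back as a tool message so the model can produce its final answer (a sketch continuing the example above):

```python
from langchain_core.messages import ToolMessage

tool_call = ai_msg.tool_calls[0]
result = multiply.invoke(tool_call["args"])  # execute the function with the model's inputs

tool_msg = ToolMessage(content=str(result), tool_call_id=tool_call["id"])
final = llm_with_tools.invoke([question, ai_msg, tool_msg])
print(final.content)  # e.g. "6 times 7 is 42."
```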


🔚 That’s it for Day 4!

You now understand how LangChain’s Tools, Tool Calling, and Messages work together to create powerful, intelligent AI workflows.


🙌 Credits

Special thanks to the LangChain Documentation — an amazing resource that guided the technical content in this blog.


👨‍💻 About Me

Cloud Specialist | AWS Community Builder | Sharing advanced AI & cloud concepts for real-world impact.

🔗 Connect on LinkedIn
