Building applications powered by Large Language Models (LLMs) is exciting, but making them truly useful often means giving them access to the real world – fetching live data, calling APIs, interacting with databases, or executing code. This is where "function calling" or "tool use" comes in, allowing the LLM to request actions from your application.
While powerful, implementing this interaction layer can quickly become complex. You need to:
- Clearly define available tools for the LLM.
- Format prompts correctly.
- Parse the LLM's response to detect tool call requests.
- Extract parameters accurately.
- Execute the corresponding function in your code.
- Handle potential errors during execution.
- Format the tool's result back for the LLM.
- Manage the conversation history across user messages, assistant responses, and tool interactions.
- Handle streaming responses for a better user experience.
That's a lot of boilerplate! Wouldn't it be great if there were a lightweight, focused way to handle this in Node.js?
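To make the pain concrete, here's a hand-rolled sketch of just one of those steps — detecting a tool-call request in the model's reply and dispatching it. The JSON convention and the `dispatch`/`tools` names below are our own illustration, not any particular API's wire format:

```javascript
// Minimal tool-call dispatch, assuming (hypothetically) that the model emits
// a JSON object like {"tool": "get_time", "params": {...}} when it wants a tool.
const tools = {
  get_time: async (params) => new Date(0).toISOString(), // stub implementation
};

async function dispatch(llmReply) {
  let request;
  try {
    request = JSON.parse(llmReply); // detect a tool-call request
  } catch {
    return { type: "text", content: llmReply }; // plain text, no tool call
  }
  const fn = tools[request.tool];
  if (!fn) return { type: "error", content: `Unknown tool: ${request.tool}` };
  try {
    const result = await fn(request.params ?? {}); // execute the tool
    return { type: "tool", name: request.tool, result }; // format the result
  } catch (err) {
    return { type: "error", content: String(err) };
  }
}
```

And that's before history management, streaming, retries, or system prompt generation even enter the picture.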
✨ Introducing @obayd/agentic
Meet @obayd/agentic – a simple yet powerful framework designed specifically to streamline the creation of function-calling LLM agents in Node.js. It focuses on providing the core building blocks you need without unnecessary complexity.
What can it do for you?
- ✅ Fluent Tool Definition: Define tools the LLM can use with a clean, chainable API (`Tool.make().description().param()...`).
- 📦 Toolpacks: Group related tools and let the LLM enable them on demand (`Toolpack`, `enable_toolpack`).
- 🌊 Streaming First: Built from the ground up for handling streaming LLM responses and tool events using async generators.
- 🔌 LLM Agnostic: Integrate with any LLM API that supports streaming responses via a simple callback function.
- 🗣️ Conversation Management: Automatically handles message history, system prompt generation (with tool docs!), tool call parsing, and result formatting.
- ⚙️ Dynamic Prompts: Define system prompt content dynamically based on runtime context.
- 🔒 Type-Safe: Includes TypeScript definitions for great DX, even in JS projects.
- ☀️ Zero dependencies.
- 🔍 Pure JavaScript.
It aims to manage the interaction flow so you can focus on defining your tools and agent logic.
Let's Build an Agent! (Quick Start)
Let's create a simple weather agent that uses a custom tool.
1. Installation
```
npm install @obayd/agentic
```
2. The `llmCallback` (Connecting to Your LLM)
This is the core integration point. You provide an `async function*` that takes the formatted message history, calls your LLM's streaming API, and yields back text chunks.
```javascript
// main.js
import { Conversation, Tool, fetchResponseToStream } from '@obayd/agentic';

// --- Your LLM Connection Logic ---
async function* llmCallback(messages, options) {
  // --- ⚠️ Replace with your actual API details! ---
  const YOUR_LLM_API_ENDPOINT = "YOUR_LLM_API_ENDPOINT"; // e.g., OpenAI, Anthropic, etc.
  const YOUR_API_KEY = "YOUR_API_KEY";
  const YOUR_MODEL_NAME = "YOUR_MODEL_NAME"; // e.g., gpt-4-turbo, claude-3-opus-20240229

  console.log("DEBUG: Sending messages to LLM:", JSON.stringify(messages, null, 2));

  try {
    const response = await fetch(YOUR_LLM_API_ENDPOINT, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${YOUR_API_KEY}`,
        // Add any other required headers (e.g., 'anthropic-version': '2023-06-01')
      },
      body: JSON.stringify({
        model: YOUR_MODEL_NAME,
        messages: messages, // Pass the history formatted by agentic
        stream: true, // MUST request streaming
        // Add other params (temperature, max_tokens, etc.)
        // Note: Adapt tool specification if your API requires a separate 'tools' array
        // alongside the system prompt instructions.
      }),
    });

    // Use the helper for standard Server-Sent Events (SSE)
    yield* fetchResponseToStream(response);
  } catch (error) {
    console.error("LLM Callback Error:", error);
    yield `[Error connecting to LLM: ${error.message}]`; // Surface errors
  }
}
```
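Because the callback is just an async generator, you can swap in a mock for local development — no API key needed. The mock and the `collect` helper below are our own illustration, not part of the library:

```javascript
// A stand-in llmCallback that streams a canned reply chunk by chunk.
async function* mockLlmCallback(messages, options) {
  const reply = "It is currently 15°C and cloudy in Tokyo.";
  for (const word of reply.split(" ")) {
    yield word + " "; // emit one chunk at a time, like a real streaming API
  }
}

// Consume it the same way the framework would:
async function collect(callback, messages) {
  let text = "";
  for await (const chunk of callback(messages, {})) text += chunk;
  return text.trim();
}
```

During development you can pass `mockLlmCallback` wherever the real callback is expected, then switch to the live one once your tools work.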
3. Define Your Tool
Use the fluent `Tool` API.
```javascript
// main.js (continued)
const getCurrentWeather = Tool.make("get_current_weather") // Unique name
  .description("Gets the current weather for a specified location.") // Crucial for LLM
  .param("location", "The city and state, e.g., San Francisco, CA", { required: true })
  .param("unit", "Temperature unit", { enum: ["celsius", "fahrenheit"] }) // Optional enum
  .action(async (params) => { // The function to execute
    console.log(`[TOOL ACTION] Getting weather for: ${params.location}`);
    // --- Your actual API call/logic here ---
    await new Promise(resolve => setTimeout(resolve, 75)); // Simulate delay
    const location = params.location.toLowerCase();
    const unit = params.unit || "celsius";
    let temp = location.includes("tokyo") ? 15 : 12;
    if (unit === "fahrenheit") temp = (temp * 9/5) + 32;
    // Return result for the LLM
    return JSON.stringify({ temperature: temp, unit: unit, condition: "Cloudy" });
  });
```
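Since the action receives a plain params object, you can keep the logic in a standalone function and unit-test it directly. The `fakeWeather` helper below mirrors the stub logic above and is our own, not part of the library:

```javascript
// Extracted weather logic: same behaviour as the stub in the .action body above.
function fakeWeather(params) {
  const location = params.location.toLowerCase();
  const unit = params.unit || "celsius";
  let temp = location.includes("tokyo") ? 15 : 12;
  if (unit === "fahrenheit") temp = (temp * 9) / 5 + 32;
  return { temperature: temp, unit, condition: "Cloudy" };
}
```

The action body then shrinks to something like `async (params) => JSON.stringify(fakeWeather(params))`.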
4. Setup the Conversation
Instantiate `Conversation` and define its content (prompt + tools).
```javascript
// main.js (continued)
const conversation = new Conversation(llmCallback);

conversation.content([
  // System prompt parts
  "You are a helpful weather assistant.",
  "Use the available tools to answer user questions.",
  // Available tools
  getCurrentWeather,
]);
```
5. Run the Interaction Loop
Call `conversation.send()` and process the event stream.
```javascript
// main.js (continued)
async function runAgent(userInput) {
  console.log(`\n👤 USER: ${userInput}`);
  console.log("\n🤖 ASSISTANT:");
  let fullResponse = "";
  try {
    const stream = conversation.send(userInput); // Get the async generator

    // Iterate through events as they arrive
    for await (const event of stream) {
      switch (event.type) {
        case 'assistant':
          process.stdout.write(event.content); // Stream text output
          fullResponse += event.content;
          break;
        case 'tool.calling':
          // LLM decided to call a tool!
          process.stdout.write(`\n[⚙️ Calling Tool: ${event.name}(${JSON.stringify(event.params)})]`);
          break;
        case 'tool':
          // Tool finished! Result is available.
          // The framework automatically sends this back to the LLM.
          console.log(`\n[✅ Tool Result (${event.name})]: ${JSON.stringify(event.result)}`);
          console.log("\n🤖 ASSISTANT (Processing result...):");
          break;
        case 'error':
          console.error(`\n[❌ CONVERSATION ERROR]: ${event.content}`);
          break;
        // case 'tool.generating': // You can optionally handle raw input generation
        //   process.stdout.write('...');
        //   break;
      }
    }
    console.log('\n--- Turn End ---');
    return fullResponse;
  } catch (error) {
    console.error("\n[💥 Critical Agent Error]:", error);
  }
}

// --- Start the agent! ---
runAgent("What's the weather in Tokyo like today?");

// You can continue the conversation:
// setTimeout(() => runAgent("Thanks! How about in Fahrenheit?"), 2000); // Example follow-up
```
Example Flow & Output:
When you run this, you'll see something like:
```
👤 USER: What's the weather in Tokyo like today?

🤖 ASSISTANT:
[⚙️ Calling Tool: get_current_weather({"location":"Tokyo"})]

🤖 ASSISTANT (Processing result...):
The current weather in Tokyo is 15°C and Cloudy.

--- Turn End ---
```
Notice how the library handled:
- Sending the prompt + tool definition.
- Parsing the LLM's request to call `get_current_weather`.
- Executing your `action` function.
- Sending the `{ temperature: 15, ... }` result back.
- Getting the final natural language response from the LLM based on the tool result.
- All while streaming the text output!
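You can also unit-test your event-handling code without a live LLM by driving the same `switch` with a canned stream. The event shapes below follow the Quick Start above; the mock generator and `renderTranscript` helper are our own:

```javascript
// A canned stream of events shaped like those yielded by conversation.send().
async function* mockEvents() {
  yield { type: "tool.calling", name: "get_current_weather", params: { location: "Tokyo" } };
  yield { type: "tool", name: "get_current_weather", result: '{"temperature":15}' };
  yield { type: "assistant", content: "It is 15°C in Tokyo." };
}

// Render a stream of events into plain transcript lines.
async function renderTranscript(stream) {
  const lines = [];
  for await (const event of stream) {
    switch (event.type) {
      case "assistant":
        lines.push(event.content);
        break;
      case "tool.calling":
        lines.push(`[calling ${event.name}]`);
        break;
      case "tool":
        lines.push(`[result ${event.result}]`);
        break;
    }
  }
  return lines;
}
```

Because `conversation.send()` and the mock are both async generators, the consuming code doesn't change between tests and production.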
Why Choose @obayd/agentic?
- Simplicity: Focuses on the core agent loop without excessive abstraction. Easy to learn and integrate.
- Flexibility: Bring your own LLM. The llmCallback provides a simple but powerful integration point.
- Streaming Native: Designed for responsive applications.
- Clear Tool Definition: The fluent API makes defining tools intuitive.
- Lightweight: No heavy dependencies.
Beyond the Basics
This was just a glimpse! Agentic also supports:
- Toolpacks: Grouping tools (`Toolpack.make().add(...)`) and letting the LLM enable them via the built-in `enable_toolpack` tool. Keeps prompts clean!
- Raw Tool Input: Tools that accept free-form text input using `.raw()`.
- Dynamic Content: Modifying the system prompt or available tools on the fly using async functions in `.content()`.
- Passing Arguments: Sending extra context to your tool actions via `conversation.send(message, arg1, arg2)`.
Get Started!
Ready to give your LLM applications superpowers?
- Install: `npm install @obayd/agentic`
- Check the Code: github.com/obaydmerz/agentic
- Read the Docs: https://agentic.gitbook.io/agentic
- Experiment!
Building powerful, interactive LLM agents just got a whole lot simpler. Give @obayd/agentic a try for your next Node.js project and let me know what you build!
Feedback and contributions are welcome! Feel free to open an issue or PR on GitHub.