The Problem With a Talking-Only Agent
Imagine you hired a brilliant assistant who could only talk. They know everything — history, science, cooking, you name it. But they can’t open your email, look up a file, or check your calendar. Great conversationalist. Not much help.
That’s what a basic AI agent is without tool calling: impressive at generating text, but disconnected from the real world.
Tool calling changes that. It lets your agent actually do things — query a database, check inventory, submit a form — by calling regular Python functions you write. The agent decides which function to call, calls it, reads the result, and then responds to the user.
This post walks you through exactly how that works using the Microsoft Agent Framework (MAF).
The Think-Decide-Act Cycle (With Tools)
Before we write any code, let’s build the mental model. Every time a user sends a message, the agent goes through three stages:
```
User message
     │
     ▼
  THINK  → The LLM reads your instructions + the user's message
           + a description of every tool available to it
  DECIDE → Should I answer directly? Or call a tool first?
     │
     ├── Direct answer → returns a response
     │
     └── Tool call → MAF runs the Python function
                        │
                        ▼
                 Tool returns a result string
                        │
                        ▼
                 THINK  → LLM reads the tool result
                 DECIDE → Now formulate the final response
                 ACT    → Return the response to the user
```
The key insight: the LLM never runs your Python code directly. It asks MAF to run it, receives the result as text, and then responds. The LLM is the brain; your tool functions are its hands.
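To make that division of labor concrete, here is a minimal sketch of the loop a framework like MAF runs on your behalf. Everything here is illustrative, not MAF's actual API: `fake_llm` stands in for the real model, and the message shapes are simplified.

```python
# Illustrative only: a stripped-down version of the think/decide/act loop.
# fake_llm stands in for the real model; MAF's internals are more involved.

def get_order_status(order_id: str) -> str:
    """Look up an order's shipping status."""
    return {"1001": "Shipped"}.get(order_id, f"Order '{order_id}' not found.")

TOOLS = {"get_order_status": get_order_status}

def fake_llm(messages: list[dict]) -> dict:
    """Pretend model: first asks for a tool call, then answers using the result."""
    last = messages[-1]
    if last["role"] == "user":  # THINK + DECIDE: it needs data it doesn't have
        return {"tool_call": {"name": "get_order_status", "args": {"order_id": "1001"}}}
    return {"content": f"Your order status: {last['content']}"}  # final answer

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        decision = fake_llm(messages)
        if "tool_call" not in decision:  # ACT: respond to the user
            return decision["content"]
        call = decision["tool_call"]
        result = TOOLS[call["name"]](**call["args"])  # the framework runs the function
        messages.append({"role": "tool", "content": result})  # result goes back as text

print(run_agent("Where is order 1001?"))
```

Notice that the model only ever produces and consumes text: it names a function and its arguments, and the framework does the actual calling.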
Step 1 — Define a Tool Function
A tool in MAF is just a plain Python function. Two things make it special:
- A docstring — This is the tool’s description. The LLM reads it to understand when and why to call this function.
- Type annotations with Field descriptions — These tell the LLM what each argument means.
Here’s a real example from our product catalog agent:
```python
from typing import Annotated

from pydantic import Field


def get_product_info(
    product_id: Annotated[str, Field(description="The product ID to look up, e.g. 'P1001'.")],
) -> str:
    """Look up product details such as name, price, and availability."""
    catalog = {
        "P1001": "Acme X200 Router — $79.99 — In stock",
        "P1002": "Acme Z50 Webcam — $49.99 — In stock",
        "P1003": "Acme PowerHub 6-Port — $34.99 — Out of stock",
    }
    return catalog.get(product_id, f"Product '{product_id}' not found in catalog.")
```
Notice a few things:
- The function returns a string. Always. (We’ll talk about why this matters later.)
- The Field(description=...) gives the LLM a hint about what product_id should look like — "P1001", not "router".
- The docstring describes the purpose, not the implementation. The LLM reads this to decide whether to call it.
Think of the docstring as a job posting for your function. The LLM is hiring the right tool for the job based on that description.
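Under the hood, the docstring and Field descriptions are compiled into a JSON schema that is sent to the model alongside every request. The exact wire format varies by provider and framework version, but for an OpenAI-style API the schema for get_product_info looks roughly like this hand-written approximation:

```python
# A hand-written approximation of the tool schema the model receives.
# Frameworks generate this automatically from the docstring and Field
# descriptions; the exact shape depends on the provider.
tool_schema = {
    "type": "function",
    "function": {
        "name": "get_product_info",
        "description": "Look up product details such as name, price, and availability.",
        "parameters": {
            "type": "object",
            "properties": {
                "product_id": {
                    "type": "string",
                    "description": "The product ID to look up, e.g. 'P1001'.",
                },
            },
            "required": ["product_id"],
        },
    },
}

print(tool_schema["function"]["name"])
```

Everything the model knows about your tool is in this schema — which is exactly why the docstring and argument descriptions deserve real care.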
Step 2 — Register the Tool with the Agent
Once your function is defined, you pass it to the agent when you create it:
```python
import asyncio

from agent_framework.openai import OpenAIChatClient
from dotenv import load_dotenv

load_dotenv()


async def main():
    agent = OpenAIChatClient().as_agent(
        name="ProductAgent",
        instructions=(
            "You are a product information assistant for Acme Electronics. "
            "Use get_product_info to answer questions about products. "
            "If a product is not found, tell the customer and suggest they visit acme.com."
        ),
        tools=get_product_info,  # <-- register the tool here
    )

    result = await agent.run("Tell me about product P1001.")
    print(result)


asyncio.run(main())
```
The instructions field is your agent’s system prompt — it sets personality, scope, and tells the agent which tools to use and when. Notice the instructions explicitly say “Use get_product_info to answer questions about products.” This matters. If you don’t tell the agent when to use a tool, it might just guess — or not use it at all.
Step 3 — Register Multiple Tools
Most real agents need more than one tool. Pass them as a list:
```python
def check_compatibility(
    product_id: Annotated[str, Field(description="The product ID to check.")],
    operating_system: Annotated[str, Field(description="The OS to check, e.g. 'Windows 11'.")],
) -> str:
    """Check whether a product is compatible with a given operating system."""
    # ... lookup logic ...
    return result_string
```
```python
agent = OpenAIChatClient().as_agent(
    name="ProductSupportAgent",
    instructions=(
        "You are a product support assistant. "
        "Use get_product_info for product details. "
        "Use check_compatibility for OS compatibility questions."
    ),
    tools=[get_product_info, check_compatibility],  # <-- list of tools
)
```
The LLM will intelligently pick the right tool based on the user’s question:
- “How much is the Z50 Webcam?” → calls get_product_info
- “Will the X200 Router work with macOS?” → calls check_compatibility
No if/else from you. The agent’s brain handles the routing.
The Most Common Beginner Mistake: Returning None
Here is a bug that will trip you up at least once. Look carefully:
```python
# BROKEN — do NOT do this
def broken_lookup(order_id: Annotated[str, Field(description="Order ID")]) -> None:
    """Look up an order."""
    orders = {"1001": "Shipped", "1002": "Processing"}
    orders.get(order_id)  # computes the result but never returns it!
```
The function computes the value and then throws it away. Python silently returns None.
The LLM receives None as the tool result. It has no idea what happened, so it makes something up — this is called hallucination. Your agent will confidently give the user wrong information with no error message anywhere.
The fix is simple: always return something.
```python
# FIXED
def fixed_lookup(order_id: Annotated[str, Field(description="Order ID")]) -> str:
    """Look up an order."""
    orders = {"1001": "Shipped", "1002": "Processing"}
    result = orders.get(order_id)
    if result is None:
        return f"Order '{order_id}' was not found."
    return result
```
Rule of thumb: Every tool function must return a str (or a typed value MAF can serialize). Never return None.
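If you want that rule enforced rather than merely remembered, you can wrap tools in a small safety net before registering them. This decorator is a hypothetical helper, not part of MAF — it simply fails fast if a tool ever returns None, instead of letting the model hallucinate around an empty result:

```python
# Hypothetical helper, not part of MAF: fail fast on a None return
# instead of silently handing the model nothing to work with.
import functools
from typing import Callable

def ensure_str(tool: Callable[..., str]) -> Callable[..., str]:
    @functools.wraps(tool)  # preserves the docstring, which the LLM reads
    def wrapper(*args, **kwargs) -> str:
        result = tool(*args, **kwargs)
        if result is None:
            raise TypeError(
                f"Tool '{tool.__name__}' returned None; tools must return a str."
            )
        return str(result)
    return wrapper

@ensure_str
def lookup_order(order_id: str) -> str:
    """Look up an order."""
    return {"1001": "Shipped", "1002": "Processing"}.get(order_id)  # bug: may be None

print(lookup_order("1001"))  # "Shipped"
# lookup_order("9999") would raise TypeError instead of returning None
```

Using functools.wraps matters here: it keeps the original docstring on the wrapped function, so the tool description the LLM sees is unchanged.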
Putting It All Together: A Complete Working Example
Here’s the full runnable script from Lesson 1.4 of the course:
```python
import asyncio
from typing import Annotated

from agent_framework.openai import OpenAIChatClient
from dotenv import load_dotenv
from pydantic import Field

load_dotenv()


def get_product_info(
    product_id: Annotated[str, Field(description="The product ID to look up, e.g. 'P1001'.")],
) -> str:
    """Look up product details such as name, price, and availability."""
    catalog = {
        "P1001": "Acme X200 Router — $79.99 — In stock",
        "P1002": "Acme Z50 Webcam — $49.99 — In stock",
        "P1003": "Acme PowerHub 6-Port — $34.99 — Out of stock",
    }
    return catalog.get(product_id, f"Product '{product_id}' not found in catalog.")


def check_compatibility(
    product_id: Annotated[str, Field(description="The product ID to check.")],
    operating_system: Annotated[str, Field(description="The OS to check compatibility with.")],
) -> str:
    """Check whether a product is compatible with a given operating system."""
    for (pid, os_key), result in {
        ("P1001", "windows 11"): "Compatible — drivers available at acme.com/drivers",
        ("P1001", "macos"): "Compatible — plug and play",
        ("P1002", "windows 11"): "Compatible — UVC standard",
        ("P1002", "linux"): "Partially compatible — basic features only",
    }.items():
        if pid == product_id.upper() and os_key in operating_system.lower():
            return result
    return f"Compatibility info for {product_id} on {operating_system} is not available."


async def main() -> None:
    agent = OpenAIChatClient().as_agent(
        name="ProductSupportAgent",
        instructions=(
            "You are a product support assistant for Acme Electronics. "
            "Use get_product_info for product details. "
            "Use check_compatibility for OS compatibility questions. "
            "Be concise and helpful."
        ),
        tools=[get_product_info, check_compatibility],
    )

    queries = [
        "How much is the Z50 Webcam (P1002)?",
        "Will the X200 Router work with macOS?",
        "Is the PowerHub compatible with Windows 11?",
    ]

    for query in queries:
        print(f"User: {query}")
        result = await agent.run(query)
        print(f"Agent: {result}\n")


if __name__ == "__main__":
    asyncio.run(main())
```
Run this, and you’ll see the agent correctly route each question to the right tool and respond naturally.