An AI agent is a program that wraps a language model (like GPT-4) with a specific role and set of instructions. Instead of just calling the model and getting text back, you give it a job description:
“You are a blogger. When someone gives you a topic, write a blog post.”
That’s it. The instructions are what turn a generic language model into a focused collaborator.
## The Big Picture: What We're Building
We’ll build a two-agent pipeline:
```
User Prompt
     │
     ▼
┌─────────┐
│ Writer  │  ← writes a blog post draft
└─────────┘
     │
     ▼
┌──────────┐
│ Reviewer │  ← gives concise feedback on the draft
└──────────┘
     │
     ▼
Final Response (draft + feedback shown together)
```
This is called a Sequential Workflow: agents run one after the other, each seeing the output of the previous step.
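The core idea can be sketched in a few lines of plain Python. The lambdas below are toy stand-ins for LLM-backed agents, and none of this is the framework's actual API; it only illustrates how each stage sees the accumulated history:

```python
# A sequential workflow in miniature: each stage receives the full message
# history and appends its own output before the next stage runs.
def run_sequential(stages, prompt):
    history = [("user", prompt)]
    for name, fn in stages:
        reply = fn(history)  # each stage sees everything so far
        history.append((name, reply))
    return history

# Toy writer/reviewer: the reviewer reacts to the writer's latest message
writer = lambda h: f"Draft about: {h[-1][1]}"
reviewer = lambda h: f"Feedback on: {h[-1][1]}"

history = run_sequential([("writer", writer), ("reviewer", reviewer)], "Python tips")
```

Because the reviewer runs second, its input already contains the writer's draft, which is exactly the property the real pipeline relies on.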
Finally, we’ll launch a DevUI — a lightweight browser interface that lets you chat with this pipeline without building any frontend code yourself.
The reviewer/critic/refine loop has become a common pattern in AI systems. Having one LLM generate content and another evaluate it can improve output quality significantly. The Microsoft Agent Framework makes this composable with very little code.
The DevUI removes the friction of building a frontend just to test your ideas. You can focus on designing agent roles and workflows, then swap in a real interface later when you’re ready to ship.
## Prerequisites
- Python 3.10+
- `agent_framework` and `python-dotenv` installed
- An OpenAI API key in a `.env` file:

```
OPENAI_API_KEY=sk-...
```
## The Full Code
```python
from agent_framework import Workflow
from agent_framework.openai import OpenAIChatClient
from agent_framework.devui import serve
from agent_framework.orchestrations import SequentialBuilder
from dotenv import load_dotenv

# Load environment variables (your API key lives here)
load_dotenv()


def main() -> None:
    # 1) Create a chat client connected to OpenAI
    chat_client = OpenAIChatClient()

    # 2) Define two specialized agents
    writer = chat_client.as_agent(
        instructions="You are an AI blogger. Provide a blog post based on the prompt.",
        name="writer",
    )
    reviewer = chat_client.as_agent(
        instructions="You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
        name="reviewer",
    )

    # 3) Chain them into a sequential pipeline
    workflow = SequentialBuilder(participants=[writer, reviewer]).build()

    # 4) Wrap the workflow as a single agent
    agent = workflow.as_agent(name="SequentialWorkflowAgent")

    # 5) Launch the DevUI in your browser
    serve(entities=[agent], auto_open=True)


if __name__ == "__main__":
    main()
```
## Step-by-Step Breakdown

### Step 1: Load Your API Key
```python
from dotenv import load_dotenv

load_dotenv()
```
`python-dotenv` reads your `.env` file and injects the values into the environment. The `OpenAIChatClient` will automatically pick up `OPENAI_API_KEY` from there. This keeps secrets out of your source code.
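Conceptually, what the library does is simple: parse `KEY=VALUE` lines and place them in the process environment. Here is a deliberately minimal, hypothetical version (the real library also handles quoting, `export` prefixes, multiline values, and more):

```python
import os

# Rough sketch of the .env-loading idea: skip comments and blanks,
# split on the first "=", and set the variable if it isn't already set.
def load_env_text(text):
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env_text("# secrets\nEXAMPLE_API_KEY=sk-test-123")
```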
### Step 2: Create the Chat Client
```python
chat_client = OpenAIChatClient()
```
This is the connection to OpenAI’s API. Think of it as a factory that can manufacture agents. On its own it doesn’t do anything — you need to create agents from it.
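The "factory" framing can be made concrete with a toy class. `MiniChatClient` is invented for this sketch and is not the framework's implementation; it just shows one client stamping out agents that share a connection:

```python
# Illustration of the client-as-factory idea: each agent bundles a role
# (instructions) with a reference to the shared client that created it.
class MiniChatClient:
    def as_agent(self, instructions, name):
        return {"instructions": instructions, "name": name, "client": self}

client = MiniChatClient()
writer = client.as_agent(instructions="You are a blogger.", name="writer")
reviewer = client.as_agent(instructions="You are a reviewer.", name="reviewer")
```

Both agents hold the same `client`, which is why one connection can back any number of specialized roles.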
### Step 3: Define Your Agents
```python
writer = chat_client.as_agent(
    instructions="You are an AI blogger. Provide a blog post based on the prompt.",
    name="writer",
)
reviewer = chat_client.as_agent(
    instructions="You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
    name="reviewer",
)
```
Each call to .as_agent() creates an independent agent with its own personality and job description:
| Parameter | Purpose |
|---|---|
| `instructions` | The system prompt, which defines the agent's role |
| `name` | A human-readable label used in logs and the DevUI |
Key insight: The reviewer instructions say “give feedback on the previous assistant message” — this is how it knows to review the writer’s output rather than writing its own post from scratch. In a sequential pipeline, each agent sees all prior messages, so the reviewer naturally sees what the writer produced.
### Step 4: Build a Sequential Workflow
```python
workflow = SequentialBuilder(participants=[writer, reviewer]).build()
```
SequentialBuilder chains agents so they execute in order. The list [writer, reviewer] determines the execution sequence. The output of each agent becomes part of the conversation history for the next.
Think of it like an assembly line:
- Station 1 (writer): receives your prompt, produces a draft
- Station 2 (reviewer): receives everything so far, adds feedback
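The builder pattern itself is easy to sketch. `MiniSequentialBuilder` below mirrors the shape of `SequentialBuilder` (collect participants, then `build()` a runnable pipeline) but is plain illustrative Python, not the real API:

```python
# Hypothetical mini-builder: build() returns a pipeline that feeds the
# growing history through each participant in list order.
class MiniSequentialBuilder:
    def __init__(self, participants):
        self.participants = participants

    def build(self):
        def pipeline(history):
            for agent in self.participants:
                history = history + [agent(history)]  # next agent sees all prior output
            return history
        return pipeline

# Two toy "agents": one drafts from the prompt, one comments on the draft
draft = lambda h: f"Draft: {h[0]}"
review = lambda h: f"Review of '{h[-1]}'"
workflow = MiniSequentialBuilder(participants=[draft, review]).build()
```

Reordering the `participants` list reorders the stations, which is the whole contract of a sequential workflow.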
### Step 5: Wrap the Workflow as a Single Agent
```python
agent = workflow.as_agent(name="SequentialWorkflowAgent")
```
This is an important abstraction. The workflow object is converted into a single unified agent. From the outside, it behaves like any other agent — you send it a message, it returns a response. Internally, it orchestrates the multi-step pipeline.
This matters because the serve() function (next step) works with agents. By wrapping the workflow, we can hand it off to any part of the framework that expects a single agent.
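This is the classic adapter idea, and it can be sketched without the framework. `WorkflowAgent` and its `run()` method are made-up names for illustration; the real interface may differ:

```python
# Sketch of "workflow as agent": wrap a multi-step pipeline so that, from
# the outside, it has the same one-message-in, one-response-out surface
# as a single agent.
class WorkflowAgent:
    def __init__(self, pipeline, name):
        self.pipeline = pipeline
        self.name = name

    def run(self, message):
        # internally orchestrate the steps, externally return one response
        return "\n\n".join(self.pipeline(message))

pipeline = lambda msg: [f"Draft: {msg}", "Feedback: tighten the intro"]
agent = WorkflowAgent(pipeline, name="SequentialWorkflowAgent")
```

Anything that expects a single agent can now hold the whole pipeline without knowing it is one.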
### Step 6: Launch the DevUI
```python
serve(entities=[agent], auto_open=True)
```
This is where the magic happens for development. serve() starts a local web server and opens a browser tab automatically (because auto_open=True). You get:
- A chat interface to send prompts to your agent
- Visibility into which agent is responding at each step
- A fast feedback loop while you iterate on instructions
The entities parameter accepts a list, so you could expose multiple agents or workflows in a single UI session.
## Running It
```bash
python simpleDemos/devui_demo.py
```
Your browser should open automatically. Try a prompt like:
“Write a short blog post about why Python is great for beginners.”
You’ll see the writer produce a draft, followed by the reviewer’s critique — all in one response.

## What to Experiment With
Once you have it running, try these tweaks to deepen your understanding:
- Change the instructions: make the reviewer harsher or more encouraging. Notice how the tone shifts.
- Add a third agent: insert an `editor` agent after the reviewer that rewrites the draft incorporating the feedback.
- Swap the order: put `reviewer` first. What happens when it reviews a prompt that hasn't been written yet?
- Set `auto_open=False`: note the URL printed in the terminal and open it manually.
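The three-agent experiment is easy to reason about with toy stages. This is a hypothetical plain-Python sketch, not framework code; the point is that the editor's input already contains both the draft and the feedback:

```python
# Toy three-stage pipeline: writer, reviewer, then an "editor" that
# rewrites the draft using the reviewer's feedback.
def editor(history):
    draft, feedback = history[-2], history[-1]
    return f"Revised ({feedback}): {draft}"

stages = [
    lambda h: f"Draft of {h[-1]}",   # writer
    lambda h: "tighten the intro",   # reviewer (canned toy feedback)
    editor,                          # editor sees both prior outputs
]

history = ["topic"]
for stage in stages:
    history.append(stage(history))
```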
## Key Concepts Recap
| Concept | What It Does |
|---|---|
| `OpenAIChatClient` | Connects to OpenAI; a factory for creating agents |
| `.as_agent()` | Wraps a client or workflow into a named, instruction-driven agent |
| `SequentialBuilder` | Chains agents to run in order, passing conversation history along |
| `workflow.as_agent()` | Packages the whole pipeline behind a single agent interface |
| `serve()` | Launches a browser-based DevUI for interactive testing |
Happy building!