How to Build a Multi-Agent Supervisor System with LangGraph, Qwen & Streamlit


Agents are autonomous entities that perceive their environment and take actions to achieve specific goals. In my previous article, I explored the basics of AI agents and their capabilities. In this article, we’ll take it a step further by diving into multi-agent systems.

Multi-agent systems are what we build when a single agent cannot handle a complex task on its own: a network of agents that work together to solve a problem. There are various multi-agent architectures, and for our implementation we’ll focus on the Supervisor Architecture. To power our agents, we’ll use Qwen, a large language model (LLM) running locally.

In this tutorial, we’ll build an AI Health Assistant system that comprises three agents: a Fitness Agent, a Dietitian Agent, and a Mental Health Agent. These three agents work together to improve the user’s fitness, nutrition, and wellness. The system is coordinated by a Supervisor Agent, which assigns tasks to each of these agents and monitors their progress.

The Supervisor Architecture

At the center of the system is the Supervisor Agent, which plays the role of a coordinator. It receives the user’s input, identifies the appropriate agent(s) to handle each part of the task, and assigns the task to them. For instance, if a user says, “I want to get in shape and eat healthier,” the Supervisor delegates this request to both the Fitness and Dietitian agents. It then collects their responses and delivers coherent feedback to the user. Each worker agent routes back to the Supervisor once it has finished its part of the task.

Now that we’ve discussed the concept and architecture of our Multi-Agent Supervisor System, let’s dive into the implementation. Follow the step-by-step process below.

Step 1: Installation

Before we can use Qwen locally, we need to install Ollama, which allows us to run large language models directly on our machine.

i) Download and Install Ollama

To download Ollama, go to their official website and download the version compatible with your operating system.

ii) Verify Installation

ollama -v

iii) Pull the Qwen Model

ollama pull qwen2.5:14b
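Optionally, you can confirm that the model runs locally before moving on by sending it a one-off prompt from the terminal (the first run may take a moment while the model loads into memory):

ollama run qwen2.5:14b "Reply with OK if you can read this."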

Step 2: Set Up API Keys

Our system will rely on external APIs to provide real-world data for the Fitness Agent and the Dietitian Agent. These agents will fetch exercise and nutrition information from these respective APIs.

APIs Used: API-Ninjas (exercise data) and Spoonacular (food and nutrition data).

i) Get Your API Keys

Sign up for free accounts on both platforms and retrieve your API keys.

ii) Store Keys

To keep our credentials secure and easily accessible, we’ll store them in a .env file.

The .env file will look like this:

EXERCISE_API_KEY=xxxxxxxx
DIET_API_KEY=xxxxxxxxxxxx
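To make these keys visible to the Python code in the upcoming steps, one option (my assumption; it isn’t shown in the original snippet) is the python-dotenv package. Call load_dotenv() once before the os.getenv(...) lines in Step 3:

# pip install python-dotenv
from dotenv import load_dotenv

load_dotenv()  # reads EXERCISE_API_KEY and DIET_API_KEY from the .env file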

Step 3: Create State

In building our AI Health Assistant, one of the first things we need to set up is the state. The state plays a crucial role in helping our agents keep track of conversation history as they interact and pass tasks between each other throughout the workflow.

To manage this, we’ll use LangGraph’s built-in MessagesState class. This class provides a convenient way to store and manage a list of messages. Our custom state class will inherit from MessagesState to leverage its built-in functionality.

from langchain_core.messages import HumanMessage, AIMessage
from langgraph.prebuilt import create_react_agent
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.checkpoint.memory import MemorySaver
from langchain.prompts import PromptTemplate
from IPython.display import display, Image
from typing import Annotated, Literal
from langchain_ollama import ChatOllama

from typing_extensions import TypedDict
from langchain.tools import tool
from langgraph.types import Command
import requests
import random
import uuid
import os

fitness_api_key = os.getenv("EXERCISE_API_KEY")
diet_api_key = os.getenv("DIET_API_KEY")

class State(MessagesState):
    next: str

Step 4: Create Custom Tools

Earlier, we obtained API keys from API-Ninjas (for exercise data) and Spoonacular (for food and nutrition data). Now it’s time to put those to use by creating custom tools for our agents. These tools are what the agents will call on to carry out their tasks.

i) Fitness Tool

We’ll use the API-Ninjas exercises endpoint to fetch various exercise types and generate a personalized workout routine for users. Here is what the code looks like:

class FitnessData:

    def __init__(self):
        self.base_url = "https://api.api-ninjas.com/v1/exercises"
        self.api_key = fitness_api_key

    def get_muscle_groups_and_types(self):
        muscle_targets = {
            'full_body': ["abdominals", "biceps", "calves", "chest", "forearms", "glutes",
                          "hamstrings", "lower_back", "middle_back", "quadriceps",
                          "traps", "triceps", "adductors"],
            'upper_body': ["biceps", "chest", "forearms", "lats", "lower_back", "middle_back", "neck", "traps", "triceps"],
            'lower_body': ["adductors", "calves", "glutes", "hamstrings", "quadriceps"]
        }
        exercise_types = {'types': ["powerlifting", "strength", "stretching", "strongman"]}

        return muscle_targets, exercise_types

    def fetch_exercises(self, type, muscle, difficulty):
        headers = {
            'X-Api-Key': self.api_key
        }
        params = {
            'type': type,
            'muscle': muscle,
            'difficulty': difficulty
        }
        try:
            response = requests.get(self.base_url, headers=headers, params=params)
            result = response.json()
            if not result:
                print(f"No exercises found for {muscle}")
            return result
        except requests.RequestException as e:
            print(f"Request failed: {e}")
            return []

    def generate_workout_plan(self, query='full_body', difficulty='intermediate'):
        output = []
        muscle_targets, exercise_types = self.get_muscle_groups_and_types()
        # pick a random muscle group and exercise type for variety
        muscle = random.choice(muscle_targets.get(query))
        type = random.choice(exercise_types.get('types'))
        result = self.fetch_exercises(type, muscle, difficulty)
        print(result)
        limit_plan = result[:3]  # keep the plan short: at most three exercises
        for i, data in enumerate(limit_plan):
            if data not in output:
                output.append(f"Exercise {i+1}: {data['name']}")
                output.append(f"Muscle: {data['muscle']}")
                output.append(f"Instructions: {data['instructions']}")

        return output

After that, we create the fitness custom tool by creating an instance of the class and calling its generate_workout_plan function. This function lets users request workout plans for specific categories such as full_body, upper_body, or lower_body. You’ll notice the @tool decorator applied to the function; this is what turns it into a LangChain custom tool.

@tool
def fitness_data_tool(query: Annotated[str, "This input will either be full_body, upper_body \
or lower_body exercise plan"]):
    """Use this tool to get a fitness or workout plan for a user.
    The workout name provided serves as your input.
    """
    fitness_tool = FitnessData()
    result = fitness_tool.generate_workout_plan(query)

    return result
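If you’d like to try the tool on its own before handing it to an agent, LangChain tools can be invoked directly. This is just a quick sanity check (it calls the API-Ninjas endpoint, so the key must be loaded):

# standalone check: prints a short list of exercise strings
print(fitness_data_tool.invoke("upper_body"))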

ii) Dietitian Tool

For the dietitian agent’s data source, we’ll use the Spoonacular API, specifically its Generate Meal Plan and Get Recipe Information endpoints. With these, the agent can generate customized meal plans based on the user’s dietary preference, such as vegetarian, vegan, or a standard diet. From the result, the user sees a meal plan along with a daily nutritional breakdown: protein, fats, and carbs.

class Dietitian:

    def __init__(self):
        self.base_url = "https://api.spoonacular.com"
        self.api_key = diet_api_key

    def fetch_meal(self, time_frame="day", diet="None"):
        url = f"{self.base_url}/mealplanner/generate"
        params = {
            "timeFrame": time_frame,
            "diet": diet,
            "apiKey": self.api_key
        }

        response = requests.get(url, params=params)
        if not response:
            print('Meal Plan not found')
        return response.json()

    def get_recipe_information(self, recipe_id):
        url = f"{self.base_url}/recipes/{recipe_id}/information"
        params = {"apiKey": self.api_key}
        response = requests.get(url, params=params)
        if not response:
            print("Recipe not found")
        return response.json()

    def generate_meal_plan(self, query):
        meals_processed = []
        # the query is the diet type (None, vegetarian, vegan), so pass it as the diet
        meal_plan = self.fetch_meal(diet=query)
        print(meal_plan)

        meals = meal_plan.get('meals')
        nutrients = meal_plan.get('nutrients')

        for i, meal in enumerate(meals):
            recipe_info = self.get_recipe_information(meal.get('id'))
            ingredients = [ingredient['original'] for ingredient in recipe_info.get('extendedIngredients')]

            meals_processed.append(f"🍽️ Meal {i+1}: {meal.get('title')}")
            meals_processed.append(f"Prep Time: {meal.get('readyInMinutes')} minutes")
            meals_processed.append(f"Servings: {meal.get('servings')}")

            meals_processed.append("📝 Ingredients:\n" + "\n".join(ingredients))
            meals_processed.append(f"📋 Instructions:\n {recipe_info.get('instructions')}")

        meals_processed.append(
            "\n Daily Nutrients:\n"
            f"Protein: {nutrients.get('protein', 'N/A')} g\n"
            f"Fat: {nutrients.get('fat', 'N/A')} g\n"
            f"Carbohydrates: {nutrients.get('carbohydrates', 'N/A')} g"
        )

        return meals_processed

Next, we create our custom tool below:

@tool
def diet_tool(query: Annotated[str, "This input will either be None, vegetarian, or vegan"]):
    """Use this tool to get a diet plan for the user.
    The diet type provided serves as your input.
    """
    dietitian_tool = Dietitian()
    result = dietitian_tool.generate_meal_plan(query)

    return result
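As with the fitness tool, you can call it directly to confirm the Spoonacular key works before wiring it into the agent:

# standalone check: prints the processed meal plan strings
print(diet_tool.invoke("vegetarian"))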

Step 5: Define LLM

Here, we’ll define our large language model: the Qwen2.5:14b model served through Ollama. Qwen2.5 supports tool calling, which makes it a good fit for agent workflows. We also create a MemorySaver checkpointer so the graph can persist conversation history across turns.

llm = ChatOllama(model="qwen2.5:14b")
memory = MemorySaver()
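As an optional sanity check, you can invoke the model directly before building any agents on top of it; ChatOllama exposes the standard invoke method:

# quick check that Ollama is serving the model
reply = llm.invoke("In one sentence, what is a balanced diet?")
print(reply.content)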

Step 6: Creating the Agents & Nodes

In this step, we create our agents and the nodes that wrap them, making use of LangGraph’s prebuilt create_react_agent function.

i) Fitness Agent

For the fitness agent, we pass three key components into the create_react_agent function: the LLM, fitness_data_tool (our custom tool for fetching exercise data), and fitness_agent_prompt.

Next comes the Fitness Node, which represents the fitness task within the LangGraph workflow. We invoke the agent with the current messages from the state (which include the user’s input). Once the agent processes the input and generates a response, we pass the result back using the Command object. This updates the state with the output and routes the Fitness node back to the Supervisor Agent once the task is completed.

fitness_agent_prompt = """
You can only answer queries related to workout.
"""


fitness_agent = create_react_agent(
llm,
tools = [fitness_data_tool],
prompt = fitness_agent_prompt)


def fitness_node(state: State) -> Command[Literal["supervisor"]]:
result = fitness_agent.invoke(state)
return Command(
update={
"messages": [
AIMessage(content=result["messages"][-1].content, name="fitness")
]
},
goto="supervisor",
)

ii) Dietitian Agent

In creating our dietitian agent and node, we repeat the same process.

dietitian_system_prompt = """
You can only answer queries related to diet and meal plans. .
"""
dietitian_agent = create_react_agent(
llm,
tools = [diet_tool],
prompt = dietitian_system_prompt)


def dietitian_node(state: State) -> Command[Literal["supervisor"]]:
result = dietitian_agent.invoke(state)
return Command(
update={
"messages": [
AIMessage(content=result["messages"][-1].content, name="dietitian")
]
},
goto="supervisor",
)

iii) Mental Health Agent

To create our mental health agent, we define a mental_health_node that includes a custom prompt telling the large language model what to do and what we expect. After the task is completed, the node uses the Command object to update the conversation state and then routes control back to the Supervisor Agent.

def mental_health_node(state: State) -> Command[Literal["supervisor"]]:
    prompt = PromptTemplate.from_template(
        """You are a supportive mental wellness coach.
        Your task is to:
        - Give a unique mental wellness tip or stress-reducing practice.
        - Make it simple, kind, and useful. Avoid repeating tips."""
    )

    chain = prompt | llm
    response = chain.invoke(state)
    return Command(
        update={
            "messages": [
                AIMessage(content=f"Here's your wellness tip: {response.content}", name="wellness")
            ]
        },
        goto="supervisor",
    )

iv) Supervisor Agent

In creating the Supervisor Agent, we define a system prompt where we instruct the agent on its role and introduce the team it will manage: the Fitness Agent, Dietitian Agent, and Mental Health Agent. We also define a Router class, which serves as a structured template for the supervisor’s output; the supervisor’s decision comes back as a dict such as {"next": "dietitian"} while work remains, or {"next": "FINISH"} once every sub-task has been answered.

Then, we implement the supervisor node, where we set up the message flow and define the logic for routing between agents. This includes determining which agent handles the next task and when to conclude the conversation.

members = ["fitness", "dietitian", "wellness"]
options = members + ["FINISH"]



system_prompt = (
"You are a supervisor tasked with managing a conversation between the"
f" following workers: {members}. Given the following user request,"
" respond with the worker to act next. Each worker will perform a"
" task and respond with their results and status. When finished,"
" respond with FINISH."


"Guidelines:\n"
"1. Always check the last message in the conversation to determine if the task has been completed.\n"
"2. If you already have the final answer or outcome, return 'FINISH'.\n"

)

class Router(TypedDict):
"""Worker to route to next. If no workers needed, route to FINISH."""

next: Literal[*options]

def supervisor_node(state: State)-> Command[Literal[*members, "__end__"]]:
messages = [
{"role": "system", "content": system_prompt},
] + state["messages"]
response = llm.with_structured_output(Router).invoke(messages)
goto = response["next"]
if goto == "FINISH":
goto = END

return Command(goto=goto, update={"next": goto})

Step 7: Build Multi-Agent Graph

Now, we build the workflow graph: we add an edge from START to the supervisor node so it is the entry point of execution, then add the remaining agent nodes and compile the graph with the MemorySaver checkpointer.


builder = StateGraph(State)
builder.add_edge(START, "supervisor")
builder.add_node("supervisor", supervisor_node)
builder.add_node("fitness", fitness_node)
builder.add_node("dietitian", dietitian_node)
builder.add_node("wellness", mental_health_node)
graph = builder.compile(checkpointer=memory)
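The IPython display imports at the top of the script can be used here to render the compiled graph, which is a handy way to confirm the supervisor-and-workers layout. This only works inside a notebook and needs LangGraph’s optional Mermaid rendering support:

# render the compiled graph as an image (notebook only)
display(Image(graph.get_graph().draw_mermaid_png()))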

Step 8: Test the Multi-Agent System

At this stage, our multi-agent system is fully set up and ready to receive user input. Before sending any input, let’s first define a helper function to extract the agents’ output.

def parse_langgraph_output(stream):
    results = []
    for key, value in stream.items():
        if key == "supervisor":
            continue
        messages = value.get("messages", [])
        for msg in messages:
            if isinstance(msg, str):
                results.append((key, msg))
            elif isinstance(msg, AIMessage):
                results.append((key, msg.content))
    return results

Then, we pass the user’s input into the system.


# Get the final step in the stream
final_event = None
# recursion_limit caps how many supervisor/worker hops the graph may take
config = {"configurable": {"thread_id": "1"}, "recursion_limit": 10}
inputs = {
    "messages": [
        HumanMessage(
            content="Give me wellness tips for the month?"
        )
    ],
}


for step in graph.stream(inputs, config=config):
    final_event = step  # keep updating to the latest step
    print(final_event)

response_message = parse_langgraph_output(final_event)
for agent, content in response_message:
    print(f"**Agent :** `{agent}`\n\n{content}")
    print("=" * 50)

Here’s what the result looks like in the Streamlit app.
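The Streamlit front end itself isn’t reproduced in this article. As a rough sketch of what the wiring can look like (my own minimal version, assuming the graph and parser above live in a hypothetical module called health_graph), it could be something like this:

import streamlit as st
from langchain_core.messages import HumanMessage
from health_graph import graph, parse_langgraph_output  # hypothetical module containing the code above

st.title("AI Health Assistant")

user_input = st.text_input("Ask about fitness, diet, or wellness:")
if st.button("Ask") and user_input:
    config = {"configurable": {"thread_id": "streamlit-session"}, "recursion_limit": 10}
    final_event = None
    # stream the graph and keep the last event, as in the console example above
    for step in graph.stream({"messages": [HumanMessage(content=user_input)]}, config=config):
        final_event = step
    for agent, content in parse_langgraph_output(final_event):
        st.subheader(f"Agent: {agent}")
        st.write(content)

Running streamlit run on that file (whatever you name it) then serves the assistant in the browser.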

Check this GitHub repository for the full code.

Thanks for reading! See you in the next one.

