
Exploring 2 Multi-Agent LLM Libraries: Camel & Langroid

In this post, we'll explore two multi-agent LLM libraries - Camel and Langroid!
Imagine a world where AI agents can collaborate, brainstorm, and tackle complex tasks together. This is the promise of multi-agent LLM systems, and two libraries are leading the charge: Camel and Langroid. In this post, we'll dive into these libraries, exploring their strengths and how they can unlock the potential of multi-agent AI.
Here's what we'll be covering:

What is a multi-agent LLM system?
Camel
Langroid
Conclusion

Let's get started.

What is a multi-agent LLM system?

Think of an LLM (Large Language Model) as a powerful language processor. Now, imagine multiple LLMs working together, sharing information and insights like a team of experts. That's the essence of a multi-agent system.
In this article we'll be working with two such systems:
Camel (Colab): A role-playing framework that lets you build multi-agent interactions. You define the roles (AI assistant and user) and the task, and Camel takes care of the conversation flow and prompting. Imagine brainstorming with an AI co-pilot, or having your AI assistant act as a team member on a project. Camel makes it possible.
Langroid (Colab): A lightweight library for building multi-agent LLM applications. It focuses on flexibility and ease of use. Define your agents, their tasks, and how they interact, and Langroid handles the communication and execution. This could be anything from building a conversational chatbot team to creating an AI-powered decision-making system.
You can think of an agent as a wrapper around the LLM. An agent is like the agent in reinforcement learning: it decides what actions to take. In this case, the agent's brain is the LLM. A multi-agent system is just an ensemble or a collection of multiple agents.
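To make the "wrapper around the LLM" idea concrete, here is a minimal sketch in plain Python. It is not from either library: the Agent class, the llm object with its complete() method, and the history list are all illustrative names.

class Agent:
    """An agent: an LLM plus some state and a policy for choosing actions."""

    def __init__(self, llm, system_message):
        self.llm = llm                   # the agent's "brain"
        self.history = [system_message]  # state the agent carries between steps

    def step(self, observation):
        # Decide on an action by querying the LLM, conditioned on history.
        self.history.append(observation)
        action = self.llm.complete("\n".join(self.history))
        self.history.append(action)
        return action

# A multi-agent system is then just several such agents exchanging
# messages, e.g. agent_b.step(agent_a.step("Develop a trading bot")).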
A popular example is MetaGPT, a multi-agent system that "takes a one-line requirement as input and outputs user stories / competitive analysis / requirements / data structures / APIs / documents"!
I've looked a bit into the open-source world for multi-agent libraries and, of those, I'd like to highlight two:

Camel

Camel is "a novel communicative agent" framework. In other words, it's a multi-agent framework. It is both a research paper and a Python library! Before we try out the library, let's get a brief overview of the paper.

So what is Camel? Well, technically, it's the name of their repository and paper, but the actual framework they introduce is called the Role-Playing framework. Here's how it works:
  1. You have an idea for a task, and you feed it into Camel
  2. You define two roles, one for the AI assistant and one for the AI user (these roles will be explained later)
  3. The idea is passed to the task specifier, which fleshes it out into a much more detailed task
  4. This specified/detailed task is then handed off to the AI user and the AI assistant
  5. The AI user is in charge of providing the instructions and planning needed to eventually reach the specified task's objective
  6. The AI assistant is in charge of carrying out those instructions
In a way, it's like a team leader and a team member. Only in this case, it's a team of 2. So, it is multi-agent, but there are only two agents involved. For their AI Society case, they use a pair of inception prompts along the following lines.
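Here is a heavily paraphrased sketch of how those two system messages might be assembled from the role names and the specified task. This is an illustration only, not Camel's exact prompt text; see the prompts folder in their repository for the real versions.

assistant_role = "Python Programmer"
user_role = "Stock Trader"
specified_task = "..."  # produced by the task specifier (step 3)

# The AI assistant is primed to produce a solution for each instruction.
assistant_sys_msg = (
    f"Never forget you are a {assistant_role} and I am a {user_role}. "
    f"We share a common task: {specified_task}. "
    "I will instruct you, and you must write a specific solution to each instruction."
)

# The AI user is primed to drive the task forward with instructions,
# and to emit CAMEL_TASK_DONE when the objective is reached.
user_sys_msg = (
    f"Never forget you are a {user_role} and I am a {assistant_role}. "
    f"We share a common task: {specified_task}. "
    "You must instruct me step by step, and say CAMEL_TASK_DONE when the task is complete."
)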


On a side note, why does the prompting need to be task-specific? How do the prompts look for other cases? After digging into the prompts folder in their repository, I realized they maintain different sets of prompts mainly to showcase other scenarios where the inception prompting is not task-oriented.

Great! Now, let's try out their library.

Installing Camel

pip install git+https://github.com/camel-ai/camel.git@v0.1.0
🛑: Make sure you're using Python 3.9+!
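If you're not sure which version your environment runs, here's a quick sanity check:

import sys

# Camel requires Python 3.9 or newer.
assert sys.version_info >= (3, 9), f"Need Python 3.9+, found {sys.version}"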

Experimenting With Camel

As of writing this, Camel doesn't have a great set of examples on their documentation page. All their examples are in their repository in this folder.
Structure of camel library.
They have an agents submodule and a prompts submodule, though all of the role-playing examples primarily use the societies submodule. Apart from the provided examples, it's not immediately clear how the agents and prompts submodules mesh with the rest of the package.
Below are the folders of examples they provide.

Let's poke around one of the ai_society examples!
# https://github.com/camel-ai/camel/tree/7f0315f91a6891d6d7bcf78be0a7815f6a49f96c/examples
import camel

from colorama import Fore

# Note, in the example ai_society/role_playing/ in the link above,
# this was:
# `from camel.societies import RolePlaying`.
from camel.agents import RolePlaying
from camel.utils import print_text_animated
We first import our libraries. colorama is for colored text output. We will use RolePlaying, which simulates a role-playing session and abstracts away the conversation and internals. print_text_animated streams the output. Then, we insert our OpenAI API key.
import openai

openai.api_key = ""  # INSERT YOUR OPENAI API KEY!
Next, we assign our role names for the user and the assistant. We leave model_type at its default of None. Passing with_task_specify=True means a task specifier will be used. The last component is the task prompt, i.e., our idea.
task_prompt = "Develop a trading bot for the stock market"
role_play_session = RolePlaying(
    assistant_role_name="Python Programmer",
    user_role_name="Stock Trader",
    task_prompt=task_prompt,
    with_task_specify=True,
)
You can find more about the RolePlaying class here.
print(
    Fore.GREEN +
    f"AI Assistant sys message:\n{role_play_session.assistant_sys_msg}\n")
print(Fore.BLUE +
      f"AI User sys message:\n{role_play_session.user_sys_msg}\n")

print(Fore.YELLOW + f"Original task prompt:\n{task_prompt}\n")
print(
    Fore.CYAN +
    f"Specified task prompt:\n{role_play_session.specified_task_prompt}\n")
print(Fore.RED + f"Final task prompt:\n{role_play_session.task_prompt}\n")
Afterwards, we use colorama to print out the AI Assistant and AI User system messages and task prompts.
The output should look something like this:

Finally, we run the simulation. This involves some work on our end, but this is made very easy because of the examples!
chat_turn_limit, n = 10, 0  # Up the chat_turn_limit for longer conversations between the agents. Note: this gets expensive!
input_assistant_msg, _ = role_play_session.init_chat()
while n < chat_turn_limit:
    n += 1
    assistant_response, user_response = role_play_session.step(
        input_assistant_msg)

    if assistant_response[1]:  # The response is a tuple of length 3; index 1 is the termination flag. The example in the repo is outdated. No worries.
        print(Fore.GREEN +
              ("AI Assistant terminated. Reason: "
               f"{assistant_response.info['termination_reasons']}."))
        break
    if user_response[1]:  # Same deal as assistant_response.
        print(Fore.GREEN +
              ("AI User terminated. "
               f"Reason: {user_response.info['termination_reasons']}."))
        break

    print_text_animated(Fore.BLUE +
                        f"AI User:\n\n{user_response[0].content}\n")  # A small edit here from the original example to get it working.
    print_text_animated(Fore.GREEN + "AI Assistant:\n\n"
                        f"{assistant_response[0].content}\n")  # Same here.

    if "CAMEL_TASK_DONE" in user_response[0].content:  # And here.
        break

    input_assistant_msg = assistant_response[0]
This is a lot of code, but before we run it, I'll walk through it and show that it's not that bad!
We have a counter n that increments at every step of the conversation (a step is the AI user instructing the assistant and the assistant responding). Once it reaches the turn limit chat_turn_limit, the loop stops! It also stops when "CAMEL_TASK_DONE", a special token from the inception prompting, is output by the AI user. We have 2 if statements to check whether either party unexpectedly terminated, and a bunch of print statements and formatting (plus streaming).
Besides the basic code for looping and terminating the multi-agent conversation, the main method is role_play_session.step(). That's it! Below is a snapshot of a bit of the output (it gets pretty long). This single run cost about $0.05. Bear in mind it will cost quite a bit more if the turn limit is higher and the task is more complex.

But that's it! There are a ton more examples in their examples folder that I encourage you to check out! This is the gist of Camel: a lightweight, high-level, compact library to get a multi-agent system (2 agents, specifically) up and running in only 30-ish lines of code.
There are the submodules I mentioned earlier, but the bulk of what you might use is in the examples.

Langroid

Time for Langroid! First, a little background. It's a lightweight, intuitive library for easily building multi-agent LLM apps. It also supports vector stores (Qdrant and Chroma as of this writing) and a set of tools for function calling. For reference, the breakdown of Langroid's submodules is below.

At its core, you're essentially setting up an agent on a particular task and executing that task. Here's how a barebones version of this looks.
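This is a minimal sketch using the same classes we'll meet below; check Langroid's README for the canonical version. The name and system_message values here are just placeholders.

from langroid.agent.chat_agent import ChatAgent, ChatAgentConfig
from langroid.agent.task import Task

# One agent wrapped in one task; run() kicks off the chat loop.
agent = ChatAgent(ChatAgentConfig())
task = Task(agent, name="Bot", system_message="You are a helpful assistant.")
task.run()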

Check out their README. They have some great starter examples, and their documentation also has a good "Getting Started" section (and check out their blog).
Interestingly, it is built with neither LangChain nor LlamaIndex.

Installing Langroid

pip install langroid
Installation for Langroid is much easier!

Experimenting

For brevity in this blog post, I'll cover just a simple demo from their page. I'll be covering the "Three communicating agents" example.
A toy numbers game, where, given a number n:
  - repeater_agent's LLM simply returns n
  - even_agent's LLM returns n/2 if n is even, else says "DO-NOT-KNOW"
  - odd_agent's LLM returns 3*n+1 if n is odd, else says "DO-NOT-KNOW"
The first thing we do is, of course, set our OpenAI API key.
# Langroid checks for an OPENAI_API_KEY in your .env file.

openai_api_key = ""  # INSERT YOUR OPENAI_API_KEY HERE!

with open(".env", "w") as f:
    f.write("OPENAI_API_KEY=" + openai_api_key)
Langroid reads the key from the .env file, which is why the code is structured this way instead.
Next, we import our relevant modules.
import langroid

from langroid.utils.constants import NO_ANSWER
from langroid.agent.chat_agent import ChatAgent, ChatAgentConfig
from langroid.agent.task import Task
from langroid.language_models.openai_gpt import OpenAIChatModel, OpenAIGPTConfig
Okay, a lot is going on here. For almost all of your uses in Langroid, you will want to import ChatAgent and ChatAgentConfig. NO_ANSWER is just a constant string placeholder; it's convenient for your prompts, as you will soon see. For more on constants, check out this folder in the repo.
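For reference, NO_ANSWER appears to boil down to the literal "DO-NOT-KNOW" string that the toy example's description mentions (at least as of this writing; check langroid/utils/constants.py for the current value):

from langroid.utils.constants import NO_ANSWER

print(NO_ANSWER)  # "DO-NOT-KNOW" at the time of writing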
Essentially, you're defining a chat agent with a config. For the llm argument of this config, you pass in an OpenAIGPTConfig (from langroid.language_models.openai_gpt), which in turn specifies the chat model via OpenAIChatModel.
config = ChatAgentConfig(
    llm=OpenAIGPTConfig(
        chat_model=OpenAIChatModel.GPT4,
    ),
    vecdb=None,
)
Then we define our 3 agents.
repeater_agent = ChatAgent(config)
repeater_task = Task(
    repeater_agent,
    name="Repeater",
    system_message="""
    Your job is to repeat whatever number you receive.
    """,
    llm_delegate=True,  # LLM takes charge of the task
    single_round=False,
)

even_agent = ChatAgent(config)
even_task = Task(
    even_agent,
    name="EvenHandler",
    system_message=f"""
    You will be given a number.
    If it is even, divide by 2 and say the result, nothing else.
    If it is odd, say {NO_ANSWER}
    """,
    single_round=True,  # task done after 1 step() with valid response
)

odd_agent = ChatAgent(config)
odd_task = Task(
    odd_agent,
    name="OddHandler",
    system_message=f"""
    You will be given a number n.
    If it is odd, return (n*3+1), say nothing else.
    If it is even, say {NO_ANSWER}
    """,
    single_round=True,  # task done after 1 step() with valid response
)
Notice how, regardless of what role the agent plays, it's always defined in the same format: a ChatAgent with a specified config (in this case, we use the same config for all agents). Then we define a task (and assign the agent to that task), pass in a system message, and assign a name.
Running this task is as simple as:
repeater_task.add_sub_task([even_task, odd_task])
repeater_task.run("3")
To have these agents communicate, we make the two handler tasks subtasks of repeater_task. That's it! Starting from 3, for example, the odd handler returns 10, the even handler then halves it to 5, then come 16, 8, 4, 2, 1: the agents end up walking the Collatz sequence. You just learned how to set up a multi-agent system in Langroid in just a couple of lines of code.
Bear in mind that Langroid is still a work-in-progress, and the library is expanding. Currently, this lightweight framework is a great tool for setting up a multi-agent system in a few lines of code. It provides more flexibility than Camel (as the Camel library is specific to their paper). Langroid also provides support for chatting with documents and tabular data.

Conclusion

In this post, I showcased Langroid and Camel, 2 lightweight, high-level multi-agent libraries. Both of these libraries have much more customization under the hood, and I encourage you to delve further into them! Happy experimenting and thanks for reading! 👋
