Paper | Key Features | Installation | Quick Start | Examples | Tutorials | Benchmark
AgentLite is a research-oriented library designed for building and advancing LLM-based task-oriented agent systems. It simplifies the implementation of new agent/multi-agent architectures, enabling easy orchestration of multiple agents through a manager agent. Whether you're building individual agents or complex multi-agent systems, AgentLite provides a straightforward and lightweight foundation for your research and development. See our paper for more details.
- [03.2024] xLAM is released! Try it with the AgentLite benchmark; its performance is comparable to GPT-4!
- [03.2024] We implemented all the agent architectures from BOLAA with AgentLite. Check out our new benchmark.
- [02.2024] Initial release of the AgentLite library and paper!
- Lightweight Codebase: Designed for easy implementation of new Agent/Multi-Agent architectures.
- Task-oriented LLM-based Agents: Focus on building agents for specific tasks, enhancing their performance and capabilities.
- Research-oriented Design: A perfect tool for exploring advanced concepts in LLM-based multi-agent systems.
To get started with AgentLite, clone the repository and install the package using the following commands:
```shell
git clone https://github.com/SalesforceAIResearch/AgentLite.git
cd AgentLite
pip install -e .
```
Ensure you check the package dependencies and requirements in `requirements.txt` and `setup.py`.
To use AgentLite, set your OpenAI API key and run one of the example scripts:
```shell
export OPENAI_API_KEY=<INSERT YOUR OpenAI API KEY HERE>
python ./example/SearchManager.py
```
Build a Wikipedia search agent by providing a specific search action. For the full source, see SearchAgent.py.
1. Define the Action of an Agent
```python
from agentlite.actions.BaseAction import BaseAction
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

class WikipediaSearch(BaseAction):
    def __init__(self) -> None:
        action_name = "Wikipedia_Search"
        action_desc = "Using this API to search Wiki content."  # the LLM uses action_name and action_desc to understand this action
        params_doc = {"query": "the search string. be simple."}  # the LLM uses params_doc to understand the parameters of self.__call__()
        self.search = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
        super().__init__(
            action_name=action_name,
            action_desc=action_desc,
            params_doc=params_doc,
        )

    def __call__(self, query):
        return self.search.run(query)
```
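The `action_name`, `action_desc`, and `params_doc` metadata are what the LLM actually reads when deciding which action to take. A minimal, dependency-free sketch of how such metadata might be rendered into a prompt line (the `render_action_doc` helper here is illustrative, not AgentLite's internal API):

```python
# Illustrative sketch: rendering an action's metadata into a prompt
# string the LLM can read. The helper name is hypothetical, not
# AgentLite's actual API.

def render_action_doc(action_name: str, action_desc: str, params_doc: dict) -> str:
    """Build a one-line action description for the LLM's action menu."""
    params = ", ".join(f"{k}: {v}" for k, v in params_doc.items())
    return f"[{action_name}]: {action_desc} Parameters: {{{params}}}"

doc = render_action_doc(
    "Wikipedia_Search",
    "Using this API to search Wiki content.",
    {"query": "the search string. be simple."},
)
print(doc)
```

This is why descriptive, concise `action_desc` and `params_doc` strings matter: they are the only documentation the LLM sees for the action.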
2. Define an Agent with the Search Action
```python
from agentlite.agents import BaseAgent
from agentlite.llm.agent_llms import get_llm_backend
from agentlite.llm.LLMConfig import LLMConfig

# get the llm for the agent; export OPENAI_API_KEY in your terminal first if you use the OpenAI API
llm_config_dict = {"llm_name": "gpt-3.5-turbo", "temperature": 0.9}
llm_config = LLMConfig(llm_config_dict)
llm = get_llm_backend(llm_config)

# define an individual agent
search_agent_info = {
    "name": "search_agent",
    "role": "you can search wikipedia to get the information."
}
search_agent = BaseAgent(
    name=search_agent_info["name"],
    role=search_agent_info["role"],
    llm=llm,
    actions=[WikipediaSearch()],
    logger=agent_logger,  # agent_logger is defined in the full source; see SearchAgent.py
)
```
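Inside the agent, the LLM's reply names an action and its parameters, and the agent routes the call to the matching action object. A dependency-free sketch of that dispatch step, with a stub action standing in for `WikipediaSearch` (none of this is AgentLite's internal API):

```python
# Illustrative sketch of the dispatch step inside an agent: the LLM
# chooses an action by name, and the agent routes the call to the
# registered action object. The stub action and hard-coded "decision"
# stand in for WikipediaSearch and a real model call.

class EchoSearch:
    action_name = "Wikipedia_Search"

    def __call__(self, query: str) -> str:
        return f"results for: {query}"

def dispatch(actions, llm_decision):
    """Route the LLM's chosen action name to the registered action."""
    registry = {a.action_name: a for a in actions}
    name, params = llm_decision
    return registry[name](**params)

# A stubbed LLM decision instead of a real gpt-3.5-turbo call:
decision = ("Wikipedia_Search", {"query": "Salesforce"})
observation = dispatch([EchoSearch()], decision)
print(observation)  # results for: Salesforce
```

This is also why `action_name` must match exactly between the action definition and the LLM's prompt: it is the lookup key for dispatch.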
3. Calling the Agent with a Task
```python
# calling the agent with a TaskPackage
from agentlite.commons import TaskPackage

test_task = "what is the founded date of microsoft"
test_task_pack = TaskPackage(instruction=test_task)
response = search_agent(test_task_pack)
print("response:", response)
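Conceptually, a task package is a task instruction plus bookkeeping (a generated task ID, a creator, a status). A rough sketch of that shape; the fields beyond `instruction` and `task_creator` are guesses for illustration, not AgentLite's actual definition:

```python
import uuid
from dataclasses import dataclass, field

# Rough sketch of what a task container like TaskPackage carries.
# Field names beyond `instruction` and `task_creator` are illustrative
# guesses, not AgentLite's actual class definition.
@dataclass
class SimpleTaskPackage:
    instruction: str
    task_creator: str = "User"
    task_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    status: str = "active"

pkg = SimpleTaskPackage(instruction="what is the founded date of microsoft")
print(pkg.task_id, pkg.instruction)
```

Wrapping tasks this way lets agents pass work between each other with a stable ID, as the Task ID in the execution log below shows.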
Orchestrate different search agents into a multi-agent system. For the full source, see simple_manager.py.
1. Define Individual Agents
```python
# define two different types of search agents
## get the llm backend
from agentlite.llm.agent_llms import get_llm_backend
from agentlite.llm.LLMConfig import LLMConfig

llm_config_dict = {
    "llm_name": "gpt-3.5-turbo",
    "temperature": 0.9,
    "context_len": 4000,
}
llm_config = LLMConfig(llm_config_dict)
llm = get_llm_backend(llm_config)

## get individual agents
from example.SearchAgent import WikiSearchAgent, DuckSearchAgent

wiki_search_agent = WikiSearchAgent(llm)
duck_search_agent = DuckSearchAgent(llm)
```
2. Define a Manager Agent
```python
from agentlite.agents import ManagerAgent

manager_agent_info = {
    "name": "search_manager",
    "role": "you are controlling wiki_search_agent and duck_search_agent to complete the search task. You should first use wiki_search_agent to complete the search task. If it didn't answer the task, ask duck_search_agent. You should integrate the answers from both agents to finalize the task."
}
# simply initialize the manager with its info and the TeamAgents
search_manager = ManagerAgent(
    llm,
    manager_agent_info["name"],
    manager_agent_info["role"],
    TeamAgents=[wiki_search_agent, duck_search_agent],
)
```
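The role prompt above encodes an orchestration policy: try the wiki agent first, fall back to the duck agent, then combine answers. A dependency-free sketch of that policy with stub agents standing in for `WikiSearchAgent` and `DuckSearchAgent` (this mirrors the prompt's intent, not `ManagerAgent`'s internals):

```python
# Illustrative sketch of the fallback policy in the manager's role
# prompt: ask the wiki agent first, fall back to the duck agent if it
# produced no answer, then join whatever answers were collected. The
# stub functions are placeholders, not AgentLite agents.

def stub_wiki_agent(task: str) -> str:
    return ""  # pretend Wikipedia found nothing for this task

def stub_duck_agent(task: str) -> str:
    return "Salesforce is famous for its CRM platform."

def manage(task: str) -> str:
    answers = []
    wiki_answer = stub_wiki_agent(task)
    if wiki_answer:
        answers.append(wiki_answer)
    else:
        # first agent failed to answer, so try the second
        answers.append(stub_duck_agent(task))
    return " ".join(answers)

print(manage("what is salesforce famous for?"))
```

In AgentLite itself this policy lives in natural language, so the LLM (not hard-coded logic) decides when to fall back; the sketch just makes the intended control flow explicit.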
3. Test the Manager Agent with a TaskPackage
```python
from agentlite.commons import TaskPackage

test_task = "what is salesforce famous for?"
test_task_pack = TaskPackage(instruction=test_task, task_creator="User")
response = search_manager(test_task_pack)
print(response)
```
Run the test in your terminal and you will see output like the following:
```
Agent search_manager receives the following TaskPackage:
[
  Task ID: 6f6bffdd-1ba8-4f7c-b326-8f409865fef0
  Instruction: what is salesforce famous for?
]
====search_manager starts execution on TaskPackage 6f6bffdd-1ba8-4f7c-b326-8f409865fef0====
Agent search_manager takes 0-step Action:
{
  name: wiki_search_agent
  params: {'Task': 'What is salesforce famous for?'}
}
```
Tutorials
- Building Search Agent
- Building a Multi-Agent Searching System
- Two Agent in Chess Game
- Math Problem Solving
- Interactive Image Understanding
- Multi_LLM_QA
- Search_and_Paint
- Philosophers_chatting
For detailed examples and tutorials on how to utilize AgentLite for your research or projects, please visit the tutorials directory.
Benchmark
If you find our paper or code useful, please cite
```bibtex
@misc{liu2024agentlite,
      title={AgentLite: A Lightweight Library for Building and Advancing Task-Oriented LLM Agent System},
      author={Zhiwei Liu and Weiran Yao and Jianguo Zhang and Liangwei Yang and Zuxin Liu and Juntao Tan and Prafulla K. Choubey and Tian Lan and Jason Wu and Huan Wang and Shelby Heinecke and Caiming Xiong and Silvio Savarese},
      year={2024},
      eprint={2402.15538},
      archivePrefix={arXiv},
      primaryClass={cs.MA}
}
```
- We use several great tools from LangChain to build the examples and the library's LLM calls.
Please reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or email [email protected].