diff --git a/docs/module_guides/deploying/agents/modules.md b/docs/module_guides/deploying/agents/modules.md
index f43d6b7de9331..1813a928c83da 100644
--- a/docs/module_guides/deploying/agents/modules.md
+++ b/docs/module_guides/deploying/agents/modules.md
@@ -42,6 +42,15 @@ maxdepth: 1
 /examples/agent/react_agent_with_query_engine.ipynb
 ```
 
+## Additional Agents (available on LlamaHub)
+
+```{toctree}
+---
+maxdepth: 1
+---
+LLMCompiler Agent Cookbook
+```
+
 (lower-level-agent-api)=
 
 ## Lower-Level Agent API
diff --git a/docs/module_guides/deploying/agents/root.md b/docs/module_guides/deploying/agents/root.md
index 9a289aa3962eb..82dbbd3969a17 100644
--- a/docs/module_guides/deploying/agents/root.md
+++ b/docs/module_guides/deploying/agents/root.md
@@ -22,6 +22,7 @@ The reasoning loop depends on the type of agent. We have support for the following:
 
 - OpenAI Function agent (built on top of the OpenAI Function API)
 - a ReAct agent (which works across any chat/text completion endpoint).
+- an LLMCompiler Agent (available as a [LlamaPack](https://llamahub.ai/l/llama_packs-agents-llm_compiler?from=llama_packs), [source repo](https://github.com/SqueezeAILab/LLMCompiler))
 
 ### Tool Abstractions
 