Guide: TurtleSim Demo
This guide will walk you through setting up and running the TurtleSim demo using the ROSA (Robot Operating System Agent) framework.
Important
The TurtleSim demo currently supports ROS1 only. Check back later for a ROS2 implementation. In the meantime, you can run the ROS1 version in Docker, so there is no need to disturb your existing ROS2 installation.
[Video: turtle_demo.mov]
Prerequisites
- Docker
- X11 server (for GUI support)
Setup
- Clone the ROSA repository:
  ```
  git clone https://github.com/nasa-jpl/rosa.git
  cd rosa
  ```
- Configure the LLM:
  ROSA-TurtleSim supports both the OpenAI API and Azure OpenAI. The LLM configuration is handled in the `rosa/src/turtle_agent/scripts/llm.py` file.
  - For Azure OpenAI, set the required environment variables as specified in the `llm.py` file.
  - For the OpenAI API, set `OPENAI_API_KEY` in your `.env` file. You will also need to change the `get_llm` function in `llm.py` to return a `ChatOpenAI` object (a hedged sketch of this change is shown after these setup steps).
  For more detailed instructions, refer to the Model Configuration guide.
- Run the demo script:
  ```
  ./demo.sh
  ```
  This script sets up the necessary Docker environment for running TurtleSim.
- Once inside the Docker container, build and start the turtle agent:
  ```
  start
  ```
  You can enable streaming mode by passing the `streaming` argument:
  ```
  start streaming:=true
  ```
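For reference, if you go the OpenAI API route, the `get_llm` change might look roughly like the following. This is a hedged sketch rather than the actual contents of `llm.py`: it assumes the `langchain_openai` and `python-dotenv` packages, and the real signature, defaults, and model name in the repository may differ.

```python
# Hypothetical sketch of an OpenAI-backed get_llm(); the real function in
# rosa/src/turtle_agent/scripts/llm.py may use a different signature and defaults.
import os

from dotenv import load_dotenv          # python-dotenv: reads the .env file
from langchain_openai import ChatOpenAI


def get_llm(streaming: bool = False) -> ChatOpenAI:
    """Return a chat model configured from OPENAI_API_KEY in the .env file."""
    load_dotenv()  # makes OPENAI_API_KEY available via the environment
    return ChatOpenAI(
        model="gpt-4o",                        # example model name, not prescribed by ROSA
        api_key=os.environ["OPENAI_API_KEY"],
        temperature=0,
        streaming=streaming,
    )
```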
Once the agent is running, you can interact with it using natural language commands. Here are some example queries:
- "Give me a ROS tutorial using the turtlesim."
- "Show me how to move the turtle forward."
- "Draw a 5-point star using the turtle."
- "Teleport to (3, 3) and draw a small hexagon."
- "Give me a list of nodes, topics, services, params, and log files."
- "Change the background color to light blue and the pen color to red."
You can also use the following commands:
- `help`: Display help information
- `examples`: Choose from a list of example queries
- `clear`: Clear the chat history
- `info`: Show details about tool usage and events (available after a query in streaming mode)
- `exit`: Exit the agent
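For context, the example queries above ultimately resolve to standard turtlesim interfaces (the `/turtle1/cmd_vel` topic and services such as `/turtle1/teleport_absolute` and `/turtle1/set_pen`). The rough rospy sketch below shows the kind of calls the agent issues on your behalf; it is illustrative only and is not code from the ROSA repository.

```python
# Illustrative only: the kind of ROS1 calls behind requests like
# "Teleport to (3, 3)" or "Change the pen color to red".
# ROSA generates and runs the equivalent commands for you.
import rospy
from geometry_msgs.msg import Twist
from turtlesim.srv import TeleportAbsolute, SetPen

rospy.init_node("turtle_reference", anonymous=True)

# Teleport the turtle to (3, 3) with heading 0
rospy.wait_for_service("/turtle1/teleport_absolute")
teleport = rospy.ServiceProxy("/turtle1/teleport_absolute", TeleportAbsolute)
teleport(3.0, 3.0, 0.0)  # x, y, theta

# Set the pen to red, width 3, pen down (off=0)
rospy.wait_for_service("/turtle1/set_pen")
set_pen = rospy.ServiceProxy("/turtle1/set_pen", SetPen)
set_pen(255, 0, 0, 3, 0)  # r, g, b, width, off

# Nudge the turtle forward by publishing a velocity command
pub = rospy.Publisher("/turtle1/cmd_vel", Twist, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect
msg = Twist()
msg.linear.x = 1.0
pub.publish(msg)
```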
Features
- Streaming mode: See the agent's responses in real time
- Tool usage information: Get insights into how the agent arrived at its answer
- Custom tools: The agent includes a "cool_turtle_tool" and a "blast_off" tool for fun interactions
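On the custom tools point: ROSA exposes its capabilities to the language model as Python tool functions. The snippet below is only a hedged illustration of what a toy tool in the spirit of `cool_turtle_tool` could look like, assuming a LangChain-style `@tool` decorator; the name is hypothetical, and the actual definitions and registration mechanism in the ROSA source may differ.

```python
# Hedged illustration of a custom agent tool; the real cool_turtle_tool /
# blast_off definitions in the ROSA repository may look different.
from langchain_core.tools import tool


@tool
def cool_turtle_fact() -> str:
    """Share a fun fact about turtles with the user."""
    return "Sea turtles can hold their breath for several hours while resting."
```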
Troubleshooting
- If you encounter X11 forwarding issues, make sure your X11 server is running and properly configured.
- For ROS-related issues, check that the ROS environment is properly set up within the Docker container.
- If you're having trouble with the LLM, verify that your API keys and endpoints are correct and that you have the necessary permissions (a quick sanity-check script is sketched below).
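As a quick way to separate credential problems from agent problems, you can try calling the model directly outside of ROSA. The snippet below is a hedged example for the OpenAI API path (it assumes `langchain_openai` and `python-dotenv`); adapt it for Azure OpenAI if that is what you configured.

```python
# Minimal credential sanity check for the OpenAI API path (not part of ROSA).
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # picks up OPENAI_API_KEY from your .env file

llm = ChatOpenAI(model="gpt-4o")  # example model name
print(llm.invoke("Reply with the single word: ok").content)
```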
For more detailed information, refer to the ROSA documentation and the ROS TurtleSim tutorials.
Copyright (c) 2024. Jet Propulsion Laboratory. All rights reserved.