Guide: TurtleSim Demo
This guide will walk you through setting up and running the TurtleSim demo using the ROSA (Robot Operating System Agent) framework.
(Demo video: turtle_demo.mov)
Prerequisites

- Docker (for running the demo script)

Setup

- Clone the ROSA repository:

  git clone https://github.com/nasa-jpl/rosa.git
  cd rosa
- Configure the LLM:

  ROSA supports both the OpenAI API and Azure OpenAI. Follow the instructions in the Model Configuration guide to set up your preferred LLM.

  - For the OpenAI API, you'll need to set OPENAI_API_KEY in your .env file.
  - For Azure OpenAI, you'll need to set several environment variables as described in the guide.

  Make sure to create the appropriate LLM instance (either ChatOpenAI or AzureChatOpenAI) and pass it to the ROSA instance in your code.
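  As a rough sketch of what that looks like in code, assuming you are using the OpenAI API (the ROSA constructor arguments shown here follow the project's README example, but may differ in your version):

  ```python
  from langchain_openai import ChatOpenAI  # use AzureChatOpenAI for Azure OpenAI deployments
  from rosa import ROSA

  # Assumes OPENAI_API_KEY is already set in your environment or loaded from .env.
  llm = ChatOpenAI(model="gpt-4o", temperature=0)

  # Pass the LLM instance to ROSA when constructing the agent (ros_version=1 for ROS 1).
  agent = ROSA(ros_version=1, llm=llm)
  ```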
- Run the demo script:

  ./demo.sh

  This script sets up the necessary Docker environment for running TurtleSim.
- Build and start the turtle agent:

  catkin build && source devel/setup.bash && roslaunch turtle_agent agent.launch

  The agent.launch file allows you to configure the streaming parameter:

  <arg name="streaming" default="true" />

  Set this to false if you prefer non-streaming responses:

  roslaunch turtle_agent agent.launch streaming:=false
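  Under the hood, the streaming argument is simply a value the agent node can read at startup. A minimal, illustrative sketch of how such a node might read it and hand it to ROSA (the actual turtle_agent implementation may differ):

  ```python
  import rospy
  from langchain_openai import ChatOpenAI
  from rosa import ROSA

  rospy.init_node("turtle_agent")

  # Read the launch-file argument as a private ROS parameter, defaulting to True.
  streaming = rospy.get_param("~streaming", True)

  llm = ChatOpenAI(model="gpt-4o", temperature=0)

  # Hypothetical wiring: forward the flag to ROSA so responses stream (or not).
  agent = ROSA(ros_version=1, llm=llm, streaming=streaming)
  ```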
Once the agent is running, you can interact with it using natural language commands. Here are some example queries:
- "Give me a ROS tutorial using the turtlesim."
- "Show me how to move the turtle forward."
- "Draw a 5-point star using the turtle."
- "Teleport to (3, 3) and draw a small hexagon."
- "Give me a list of ROS nodes and their topics."
- "Change the background color to light blue and the pen color to red."
You can also use the following commands:
- help: Display help information
- examples: Choose from a list of example queries
- clear: Clear the chat history
- exit: Exit the agent
Troubleshooting

- If you encounter environment variable errors, make sure all required variables are set in your .env file or system environment, as described in the Model Configuration guide.
- For ROS-related issues, check that your ROS environment is properly set up and that the ROS master is running.
- If you're having trouble with the LLM, verify your API keys and endpoints are correct and that you have the necessary permissions.
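A quick way to rule out missing-variable problems is to load your .env file explicitly and check for the key you expect. An illustrative check for the OpenAI case (assumes the python-dotenv package is installed; Azure OpenAI requires several additional variables, as described in the guide):

```python
import os

from dotenv import load_dotenv

# Load variables from a .env file in the current directory, if one exists.
load_dotenv()

if not os.getenv("OPENAI_API_KEY"):
    raise SystemExit("OPENAI_API_KEY is not set; check your .env file or environment.")
print("OPENAI_API_KEY found.")
```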
For more detailed information, refer to the ROSA documentation and the ROS TurtleSim tutorials.
Copyright (c) 2024. Jet Propulsion Laboratory. All rights reserved.