Guide: TurtleSim Demo

This guide will walk you through setting up and running the TurtleSim demo using the ROSA (Robot Operating System Agent) framework.

Important

The TurtleSim demo currently supports ROS1 only; check back later for a ROS2 implementation. In the meantime, you can run the ROS1 version in Docker, so there is no need to disturb your existing ROS2 installation.

Demo video: turtle_demo.mov

Prerequisites

  • Docker
  • X11 server (for GUI support)

Setup

  1. Clone the ROSA repository:

    git clone https://github.com/nasa-jpl/rosa.git
    cd rosa
    
  2. Configure the LLM:

    ROSA-TurtleSim supports both the OpenAI API and Azure OpenAI. The LLM configuration is handled in the rosa/src/turtle_agent/scripts/llm.py file.

    • For Azure OpenAI, set the required environment variables as specified in the llm.py file.
    • For the OpenAI API, set OPENAI_API_KEY in your .env file. You will also need to change the get_llm function in llm.py to return a ChatOpenAI object (a sketch of this change is shown below).

    For more detailed instructions, refer to the Model Configuration guide.
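
    For reference, below is a minimal sketch of what an OpenAI-API version of get_llm might look like. The model name and the exact structure of llm.py are assumptions; adapt them to the file in your checkout.

    import os

    from dotenv import load_dotenv
    from langchain_openai import ChatOpenAI

    load_dotenv()  # reads OPENAI_API_KEY from your .env file

    def get_llm(streaming: bool = False):
        # Hypothetical OpenAI-API variant of get_llm; the file shipped with ROSA
        # targets Azure OpenAI by default.
        return ChatOpenAI(
            model="gpt-4o",  # example model name; substitute your preferred model
            api_key=os.getenv("OPENAI_API_KEY"),
            streaming=streaming,
        )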

  3. Run the demo script:

    ./demo.sh
    

    This script sets up the Docker environment needed to run the TurtleSim demo.

  4. Once inside the Docker container, build and start the turtle agent:

    start
    

    You can enable streaming mode by passing the streaming argument:

    start streaming:=true
    

Usage

Once the agent is running, you can interact with it using natural language commands. Here are some example queries:

  • "Give me a ROS tutorial using the turtlesim."
  • "Show me how to move the turtle forward."
  • "Draw a 5-point star using the turtle."
  • "Teleport to (3, 3) and draw a small hexagon."
  • "Give me a list of nodes, topics, services, params, and log files."
  • "Change the background color to light blue and the pen color to red."

You can also use the following commands:

  • help: Display help information
  • examples: Choose from a list of example queries
  • clear: Clear the chat history
  • info: Show details about tool usage and events (available after a query in streaming mode)
  • exit: Exit the agent

Features

  • Streaming mode: See the agent's response in real-time
  • Tool usage information: Get insights into how the agent arrived at its answer
  • Custom tools: The agent includes a "cool_turtle_tool" and a "blast_off" tool for fun interactions
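
ROSA is built on LangChain, so custom tools are ordinary LangChain-style tools. The sketch below is purely illustrative (the tool name and behavior are invented); see the turtle agent source for the actual cool_turtle_tool and blast_off implementations.

    from langchain.agents import tool

    @tool
    def spiral_greeting(name: str) -> str:
        """Illustrative custom tool: greet the user before drawing."""
        return f"Hello, {name}! Let's draw something fun with the turtle."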

Troubleshooting

  • If you encounter X11 forwarding issues, make sure your X11 server is running and properly configured.
  • For ROS-related issues, check that the ROS environment is properly set up within the Docker container.
  • If you're having trouble with the LLM, verify your API keys and endpoints are correct and that you have the necessary permissions.
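
If the agent cannot reach your LLM, testing your credentials outside of ROSA can help narrow things down. For the OpenAI API path, a quick sanity check along these lines (assuming langchain_openai is installed and OPENAI_API_KEY is set in your environment) confirms the key works:

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o")  # example model name
    print(llm.invoke("Reply with the single word: pong").content)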

For more detailed information, refer to the ROSA documentation and the ROS TurtleSim tutorials.