
Guide: TurtleSim Demo


TurtleSim Demo with ROSA

This guide will walk you through setting up and running the TurtleSim demo using the ROSA (Robot Operating System Agent) framework.

Prerequisites

  • Docker (for running the demo script)
  • An OpenAI API key or Azure OpenAI credentials (configured in step 2 of the setup)

Setup

  1. Clone the ROSA repository:

    git clone https://github.com/nasa-jpl/rosa.git
    cd rosa
    
  2. Configure the LLM in src/turtle_agent/scripts/llm.py:

    Option A: Using AzureChatOpenAI (Default Configuration)

    • If you're using Azure OpenAI, you'll need to set up the following environment variables in the rosa/.env file:
      APIM_SUBSCRIPTION_KEY=your_subscription_key
      AZURE_TENANT_ID=your_tenant_id
      AZURE_CLIENT_ID=your_client_id
      AZURE_CLIENT_SECRET=your_client_secret
      DEPLOYMENT_ID=your_deployment_id
      API_VERSION=your_api_version
      API_ENDPOINT=your_api_endpoint
      
    • The get_llm function in llm.py will use these variables with the get_env_variable function.
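
    • For reference, here is a minimal sketch of what an AzureChatOpenAI-based get_llm could look like. This is not necessarily the exact implementation in llm.py; in particular, the Azure AD token scope and the APIM subscription-key header name below are assumptions, and get_env_variable is the helper already defined in llm.py:
      from azure.identity import ClientSecretCredential
      from langchain_openai import AzureChatOpenAI

      def get_llm(streaming: bool = False):
          # Authenticate with Azure AD using the service-principal credentials from .env.
          credential = ClientSecretCredential(
              tenant_id=get_env_variable("AZURE_TENANT_ID"),
              client_id=get_env_variable("AZURE_CLIENT_ID"),
              client_secret=get_env_variable("AZURE_CLIENT_SECRET"),
          )
          token = credential.get_token("https://cognitiveservices.azure.com/.default")

          return AzureChatOpenAI(
              azure_deployment=get_env_variable("DEPLOYMENT_ID"),
              api_version=get_env_variable("API_VERSION"),
              azure_endpoint=get_env_variable("API_ENDPOINT"),
              azure_ad_token=token.token,
              # Assumed header name for passing the APIM subscription key.
              default_headers={"Ocp-Apim-Subscription-Key": get_env_variable("APIM_SUBSCRIPTION_KEY")},
              streaming=streaming,
          )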

    Option B: Using OpenAI's API

    • To use OpenAI's API instead of Azure, modify the get_llm function in src/turtle_agent/scripts/llm.py:
      from langchain_openai import ChatOpenAI
      
      def get_llm(streaming: bool = False):
          return ChatOpenAI(
              model_name="gpt-4o",  # or your preferred model
              streaming=streaming,
              openai_api_key=get_env_variable("OPENAI_API_KEY")
          )
    • In the rosa/.env file, set the OPENAI_API_KEY:
      OPENAI_API_KEY=your_openai_api_key
      

    Choose the option that best fits your setup and preferences.
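
    Whichever option you choose, you can sanity-check the LLM configuration before launching the full demo. The snippet below is a minimal sketch: it assumes it is run from src/turtle_agent/scripts/ (for example inside the demo container, where the Python dependencies are available) and that get_llm returns a standard LangChain chat model:
      # Quick sanity check for the configured LLM.
      from llm import get_llm

      llm = get_llm(streaming=False)
      response = llm.invoke("Reply with the single word: ready")
      print(response.content)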

  3. Run the demo script:

    ./demo.sh
    

    This script sets up the Docker environment needed to run the TurtleSim demo.

  4. Build and start the turtle agent:

    catkin build && source devel/setup.bash && roslaunch turtle_agent agent.launch
    

    The agent.launch file allows you to configure the streaming parameter:

    <arg name="streaming" default="true" />

    Set this to false if you prefer non-streaming responses:

    roslaunch turtle_agent agent.launch streaming:=false
    

Usage

Once the agent is running, you can interact with it using natural language commands. Here are some example queries:

  • "Give me a ROS tutorial using the turtlesim."
  • "Show me how to move the turtle forward."
  • "Draw a 5-point star using the turtle."
  • "Teleport to (3, 3) and draw a small hexagon."
  • "Give me a list of ROS nodes and their topics."
  • "Change the background color to light blue and the pen color to red."

You can also use the following commands:

  • help: Display help information
  • examples: Choose from a list of example queries
  • clear: Clear the chat history
  • exit: Exit the agent

Troubleshooting

  • If you encounter environment variable errors, make sure all required variables are set in your .env file or system environment (a quick check is sketched after this list).
  • For ROS-related issues, check that your ROS environment is properly set up and the ROS master is running.
  • If you're having trouble with the LLM, verify your API keys and endpoints are correct and that you have the necessary permissions.
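
To confirm the required variables are actually visible to the agent, you can run a quick check like the sketch below. It lists the Azure variable names from Setup step 2, Option A (trim the list to OPENAI_API_KEY if you use Option B) and assumes the .env values have been exported into the environment:

    import os

    # Variables required by the default Azure configuration (Setup step 2, Option A).
    REQUIRED_VARS = [
        "APIM_SUBSCRIPTION_KEY",
        "AZURE_TENANT_ID",
        "AZURE_CLIENT_ID",
        "AZURE_CLIENT_SECRET",
        "DEPLOYMENT_ID",
        "API_VERSION",
        "API_ENDPOINT",
    ]

    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        print("Missing environment variables: " + ", ".join(missing))
    else:
        print("All required environment variables are set.")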

For more detailed information, refer to the ROSA documentation and the ROS TurtleSim tutorials.