Added an agent for travel assistant example #120

109 changes: 109 additions & 0 deletions examples/travel_assistant/README.md
# Travel Recommendation with Loop Example

This example demonstrates how to use the framework for travel assistance tasks with loop functionality. The example code can be found in the `examples/travel_assistant` directory.
```bash
cd examples/travel_assistant
```

## Overview

This example implements an interactive travel assistant workflow that uses a loop-based approach to refine recommendations based on user feedback. The workflow consists of the following key components:

1. **Initial Travel Details Input**
- DestinationInput: Handles the input and processing of initial travel details such as destination, dates, and preferences
- Serves as the starting point for the travel planning process

2. **Interactive QA Loop with Preferences Integration**
- ScenicSpotQA: Conducts an interactive Q&A session to gather context and preferences
   - Uses a web search tool to fetch real-time weather data for the specified location
- ScenicSpotDecider: Evaluates if sufficient information has been collected based on:
- User preferences
- Real-time destination information
- Uses DoWhileTask to continue the loop until adequate information is gathered
- Loop terminates when ScenicSpotDecider returns decision=true

3. **Final Scenic Spot Recommendation**
- ScenicSpotRecommendation: Generates the final scenic spot suggestions based on:
- The initial input details
- Information collected during the Q&A loop
- Real-time destination information from web search
- Other context (preferences, special requirements, etc.)

4. **Workflow Flow**
```
Start -> Initial Travel Details Input -> ScenicSpotQA Loop (QA + Destination Info + Decision) -> Final Scenic Spot Recommendation -> End
```
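
The loop above can be sketched in plain Python, independent of the framework. Here `decide` is a hypothetical stand-in for ScenicSpotDecider (the real worker calls an LLM), and the while-true structure mirrors DoWhileTask: the Q&A body always runs at least once before the decision is checked.

```python
def decide(gathered):
    # Stand-in for ScenicSpotDecider: declares "ready" after 3 answers.
    # The real decider asks an LLM whether the gathered info suffices.
    return {"decision": len(gathered) >= 3}

def run_refinement_loop(answer_stream):
    """Do-while loop: the Q&A body runs at least once, then repeats
    until the decider returns decision=True (cf. DoWhileTask)."""
    gathered = []
    answers = iter(answer_stream)
    while True:
        gathered.append(next(answers))    # ScenicSpotQA step
        if decide(gathered)["decision"]:  # ScenicSpotDecider step
            return gathered

print(run_refinement_loop(["Paris", "June 3-7", "museums", "mid budget"]))
# -> ['Paris', 'June 3-7', 'museums']
```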

The workflow leverages Redis for state management and the Conductor server for workflow orchestration. This architecture enables:
- Scenic spot recommendations
- Weather-aware outfit suggestions using real-time data
- Interactive refinement through structured Q&A
- Context-aware suggestions incorporating multiple factors
- Persistent state management across the workflow


## Prerequisites

- Python 3.10+
- Required packages installed (see requirements.txt)
- Access to OpenAI API or compatible endpoint
- A Bing Search API key for the web search tool, which fetches real-time weather information used in the recommendations (see configs/tools/all_tools.yml)
- Redis server running locally or remotely
- Conductor server running locally or remotely

## Configuration

The container.yaml file manages dependencies and settings for the different components of the system, including Conductor connections, Redis connections, and other service configurations. To set up your configuration:

1. Generate the container.yaml file:
```bash
python compile_container.py
```
This will create a container.yaml file with default settings under `examples/travel_assistant`.

2. Configure your LLM settings in `configs/llms/gpt.yml` and `configs/llms/text_res.yml`:
   - Set your OpenAI API key or compatible endpoint via environment variables or by editing the yml files directly
```bash
export custom_openai_key="your_openai_api_key"
export custom_openai_endpoint="your_openai_endpoint"
```
   - Configure other model settings (e.g., temperature) as needed, via environment variables or by editing the yml files directly

3. Configure your Bing Search API key in `configs/tools/all_tools.yml`:
   - Set your Bing API key via an environment variable or by editing the yml file directly
```bash
export bing_api_key="your_bing_api_key"
```

4. Update settings in the generated `container.yaml`:
- Modify Redis connection settings:
- Set the host, port and credentials for your Redis instance
- Configure both `redis_stream_client` and `redis_stm_client` sections
- Update the Conductor server URL under conductor_config section
- Adjust any other component settings as needed
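
As a rough illustration of step 4, the relevant parts of container.yaml might look like the sketch below. The key names here are assumptions for illustration only; match them to what `compile_container.py` actually generates.

```yaml
conductor_config:
  base_url: http://localhost:8080   # Conductor server URL
connectors:
  redis_stream_client:
    host: localhost
    port: 6379
    password: null                  # set if your Redis requires auth
  redis_stm_client:
    host: localhost
    port: 6379
    password: null
```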

## Running the Example

1. Run the scenic spot recommendation workflow:

For terminal/CLI usage:
```bash
python run_cli.py
```

For app/GUI usage:
```bash
python run_app.py
```


## Troubleshooting

If you encounter issues:
- Verify Redis is running and accessible
- Check your OpenAI API key and Bing API key are valid
- Ensure all dependencies are installed correctly
- Review logs for any error messages
- Confirm Conductor server is running and accessible
- Check Redis Stream client and Redis STM client configuration

import json_repair
import re
from pathlib import Path
from typing import List
from pydantic import Field

from omagent_core.models.llms.base import BaseLLMBackend
from omagent_core.utils.registry import registry
from omagent_core.models.llms.prompt.prompt import PromptTemplate
from omagent_core.engine.worker.base import BaseWorker
from omagent_core.models.llms.prompt.parser import StrParser
from omagent_core.models.llms.openai_gpt import OpenaiGPTLLM
from omagent_core.utils.logger import logging


CURRENT_PATH = Path(__file__).parents[0]


@registry.register_worker()
class ScenicSpotDecider(BaseLLMBackend, BaseWorker):
"""Scenic spot decision processor that determines if enough information has been gathered.

This processor evaluates whether sufficient information exists to make a scenic spot recommendation
by analyzing user instructions, search results, and any feedback received. It uses an LLM to
make this determination.

If enough information is available, it returns success. Otherwise, it returns failed along with
feedback about what additional information is needed.

Attributes:
output_parser: Parser for string outputs from the LLM
llm: OpenAI GPT language model instance
prompts: List of system and user prompts loaded from template files
"""
llm: OpenaiGPTLLM
prompts: List[PromptTemplate] = Field(
default=[
PromptTemplate.from_file(
CURRENT_PATH.joinpath("sys_prompt.prompt"), role="system"
),
PromptTemplate.from_file(
CURRENT_PATH.joinpath("user_prompt.prompt"), role="user"
),
]
)

def _run(self, *args, **kwargs):
"""Process the current state to determine if a scenic spot recommendation can be made.

Retrieves the current user instructions, search information, and feedback from the
short-term memory. Uses the LLM to analyze this information and determine if
sufficient details exist to make a recommendation.

Args:
args: Variable length argument list
kwargs: Arbitrary keyword arguments

Returns:
dict: Contains 'decision' key with:
- True if enough information exists to make a recommendation
- False if more information is needed, also stores feedback about what's missing
"""
        # Retrieve context data from short-term memory, defaulting to empty lists
        stm = self.stm(self.workflow_instance_id)
        user_instruct = stm.get("user_instruction") or []
        search_info = stm.get("search_info") or []
        feedback = stm.get("feedback") or []

# Query LLM to analyze available information
chat_complete_res = self.simple_infer(
instruction=str(user_instruct),
previous_search=str(search_info),
feedback=str(feedback)
)
content = chat_complete_res["choices"][0]["message"].get("content")
content = self._extract_from_result(content)
logging.info(content)

# Return decision and handle feedback if more information is needed
if content.get("decision") == "ready":
return {"decision": True}
else:
feedback.append(content["reason"])
self.stm(self.workflow_instance_id)["feedback"] = feedback
return {"decision": False}

def _extract_from_result(self, result: str) -> dict:
try:
pattern = r"```json\s+(.*?)\s+```"
match = re.search(pattern, result, re.DOTALL)
if match:
return json_repair.loads(match.group(1))
else:
return json_repair.loads(result)
        except Exception as error:
            raise ValueError("LLM generation is not valid.") from error
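
The fence-stripping logic in `_extract_from_result` can be illustrated with a self-contained sketch. Here the stdlib `json` module stands in for `json_repair` (which additionally tolerates and repairs malformed JSON from the LLM):

```python
import json
import re

def extract_json(result: str) -> dict:
    """Parse a JSON object from an LLM reply, unwrapping an optional
    ```json ... ``` fence first (mirrors _extract_from_result)."""
    match = re.search(r"```json\s+(.*?)\s+```", result, re.DOTALL)
    payload = match.group(1) if match else result
    try:
        return json.loads(payload)
    except json.JSONDecodeError as error:
        raise ValueError("LLM generation is not valid.") from error

reply = '```json\n{"decision": "need_more_info", "reason": "No dates."}\n```'
print(extract_json(reply)["decision"])  # -> need_more_info
```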
You are a helpful travel advisor assistant that gathers information to help users get travel recommendations based on their needs and preferences.

You will receive:
- User instructions and requests
- Previously searched information (like weather conditions or local attractions)
- Feedback from the last travel recommendation decider about what additional information is still needed

Your task is to analyze all the provided information and make a decision about whether enough details have been gathered to generate a good travel recommendation.

You should respond in this format:
{
"decision": "ready" or "need_more_info",
"reason": "If need_more_info, explain what specific information is still missing and why it's important. If ready, no explanation needs to be provided."
}

First and foremost, carefully analyze the user's instruction. If the user explicitly states they want an immediate recommendation or indicates they don't want to answer more questions, you should return "ready" regardless of missing information.

When evaluating if you have enough information (only if the user hasn't requested immediate recommendations), consider:
1. Do you know the destination city/country?
2. Do you understand the specific purpose of travel (e.g., vacation, business)?
3. Is the weather information provided? (The weather should be requested only once. As long as any weather information is provided, this requirement is satisfied.)
4. Are the user's travel preferences and constraints clear (e.g., adventure, relaxation)?
5. Do you have enough context about accommodation preferences?
6. Is there clarity about budget constraints?
7. Do you know the travel dates and duration?
8. Are there specific attractions or activities the user wants to include?
9. Other specific details that would help with the recommendation

Note: If any weather information is already provided in the Previously searched information, do not request weather information again.

Your response must be in valid JSON format. Be specific in your reasoning about what information is missing or why the collected information is sufficient.
Now, it's your turn to complete the task.

Input Information:
- User instructions and requests: {{instruction}}
- Previously searched information: {{previous_search}}
- Feedback from last decider: {{feedback}}