support personal rapidapi account
pooruss committed Aug 4, 2023
1 parent c6643b2 commit 34bbf3d
Showing 7 changed files with 151 additions and 17 deletions.
65 changes: 60 additions & 5 deletions README.md
@@ -237,13 +237,15 @@ deepspeed --master_port=20001 toolbench/train/train_long_seq_lora.py \
```


## Inference With Our RapidAPI Server
Please fill out the [form](https://forms.gle/oCHHc8DQzhGfiT9r6) first; after review, we will send you the ToolBench key. Then export your ToolBench key:
```bash
export TOOLBENCH_KEY="your_toolbench_key"
```
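For context, the pipeline forwards each tool call to the ToolBench server at `http://8.218.239.54:8080/rapidapi` (see `toolbench/inference/Downstream_tasks/rapidapi.py`), with your key carried in the JSON payload. A minimal stdlib-only sketch of such a request; the payload field names here are illustrative assumptions, not the server's exact schema:

```python
import json
import urllib.request

# Endpoint used by toolbench/inference/Downstream_tasks/rapidapi.py
SERVICE_URL = "http://8.218.239.54:8080/rapidapi"

def build_payload(category: str, tool_name: str, api_name: str,
                  tool_input: dict, toolbench_key: str) -> dict:
    """Assemble a tool-call payload. Field names are illustrative assumptions."""
    return {
        "category": category,
        "tool_name": tool_name,
        "api_name": api_name,
        "tool_input": tool_input,
        "toolbench_key": toolbench_key,
    }

def call_toolbench(payload: dict, timeout: int = 30) -> dict:
    """POST the payload to the ToolBench server and decode the JSON reply."""
    req = urllib.request.Request(
        SERVICE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```

The real pipeline additionally truncates observations to `--max_observation_length` and maps failures to numeric error codes; this sketch omits that handling.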

### For ToolLLaMA

To run inference with ToolLLaMA, use the following commands:
```bash
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
@@ -258,7 +260,9 @@ python toolbench/inference/qa_pipeline.py \
--toolbench_key $TOOLBENCH_KEY
```

For **ToolLLaMA-LoRA**:
```bash
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
@@ -275,7 +279,7 @@ python toolbench/inference/qa_pipeline.py \
--toolbench_key $TOOLBENCH_KEY
```

For ToolLLaMA-LoRA under the **open-domain** setting, run:
```bash
export PYTHONPATH=./
python toolbench/inference/qa_pipeline_open_domain.py \
@@ -295,6 +299,57 @@ python toolbench/inference/qa_pipeline_open_domain.py \
--toolbench_key $TOOLBENCH_KEY
```

### For OpenAI Models
To use ChatGPT, run:
```bash
export TOOLBENCH_KEY=""
export OPENAI_KEY=""
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
--tool_root_dir data/toolenv/tools/ \
--backbone_model chatgpt_function \
--openai_key $OPENAI_KEY \
--max_observation_length 1024 \
--method DFS_woFilter_w2 \
--input_query_file data/instruction/inference_query_demo.json \
--output_answer_file data/answer/chatgpt_dfs \
--toolbench_key $TOOLBENCH_KEY
```

To use Text-Davinci-003, run:
```bash
export TOOLBENCH_KEY=""
export OPENAI_KEY=""
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
--tool_root_dir data/toolenv/tools/ \
--backbone_model davinci \
--openai_key $OPENAI_KEY \
--max_observation_length 1024 \
--method DFS_woFilter_w2 \
--input_query_file data/instruction/inference_query_demo.json \
--output_answer_file data/answer/davinci_dfs \
--toolbench_key $TOOLBENCH_KEY
```

## Inference With Your Own RapidAPI Account
To run inference with your own RapidAPI account, pass your **RapidAPI key** and set the `--use_rapidapi_key` flag:
```bash
export RAPIDAPI_KEY=""
export OPENAI_KEY=""
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
--tool_root_dir data/toolenv/tools/ \
--backbone_model chatgpt_function \
--openai_key $OPENAI_KEY \
--max_observation_length 1024 \
--method DFS_woFilter_w2 \
--input_query_file data/instruction/inference_query_demo.json \
--output_answer_file data/answer/chatgpt_dfs \
--rapidapi_key $RAPIDAPI_KEY \
--use_rapidapi_key
```

## Setting up and running the interface
ToolBench contains a Web UI based on [Chatbot UI](https://github.com/mckaywrigley/chatbot-ui), forked to include tool use in the interface. It comes in two parts: the backend server and [chatbot-ui-toolllama](https://github.com/lilbillybiscuit/chatbot-ui-toolllama). Here is a [video demo](assets/toolbench-demo.mp4).

58 changes: 54 additions & 4 deletions README_ZH.md
@@ -241,13 +241,13 @@ deepspeed --master_port=20001 toolbench/train/train_long_seq_lora.py \
```


## Inference With Our RapidAPI Server
Please fill out the [form](https://forms.gle/oCHHc8DQzhGfiT9r6) first; after review, we will send you the ToolBench key. Then export your ToolBench key:
```bash
export TOOLBENCH_KEY="your_toolbench_key"
```

### For ToolLLaMA
To run inference with ToolLLaMA, use the following commands:
```bash
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
@@ -298,6 +298,56 @@ python toolbench/inference/qa_pipeline_open_domain.py \
--output_answer_file data/answer/toolllama_lora_dfs_open_domain \
--toolbench_key $TOOLBENCH_KEY
```
### For OpenAI Models
To use ChatGPT, run:
```bash
export TOOLBENCH_KEY=""
export OPENAI_KEY=""
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
--tool_root_dir data/toolenv/tools/ \
--backbone_model chatgpt_function \
--openai_key $OPENAI_KEY \
--max_observation_length 1024 \
--method DFS_woFilter_w2 \
--input_query_file data/instruction/inference_query_demo.json \
--output_answer_file data/answer/chatgpt_dfs \
--toolbench_key $TOOLBENCH_KEY
```

To use Text-Davinci-003, run:
```bash
export TOOLBENCH_KEY=""
export OPENAI_KEY=""
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
--tool_root_dir data/toolenv/tools/ \
--backbone_model davinci \
--openai_key $OPENAI_KEY \
--max_observation_length 1024 \
--method DFS_woFilter_w2 \
--input_query_file data/instruction/inference_query_demo.json \
--output_answer_file data/answer/davinci_dfs \
--toolbench_key $TOOLBENCH_KEY
```

## Inference With Your Own RapidAPI Account
To run inference with your own RapidAPI account, pass your **RapidAPI key** and set the `--use_rapidapi_key` flag:
```bash
export RAPIDAPI_KEY=""
export OPENAI_KEY=""
export PYTHONPATH=./
python toolbench/inference/qa_pipeline.py \
--tool_root_dir data/toolenv/tools/ \
--backbone_model chatgpt_function \
--openai_key $OPENAI_KEY \
--max_observation_length 1024 \
--method DFS_woFilter_w2 \
--input_query_file data/instruction/inference_query_demo.json \
--output_answer_file data/answer/chatgpt_dfs \
--rapidapi_key $RAPIDAPI_KEY \
--use_rapidapi_key
```

## Setting up and running the interface

17 changes: 17 additions & 0 deletions scripts/inference_chatgpt_pipeline_w_rapidapi_key.sh
@@ -0,0 +1,17 @@
export RAPIDAPI_KEY=""
export OUTPUT_DIR="data/answer/chatgpt_dfs"
export OPENAI_KEY=""
export PYTHONPATH=./

mkdir -p $OUTPUT_DIR
python toolbench/inference/qa_pipeline.py \
--tool_root_dir data/toolenv/tools/ \
--backbone_model chatgpt_function \
--openai_key $OPENAI_KEY \
--max_observation_length 1024 \
--method DFS_woFilter_w2 \
--input_query_file data/instruction/inference_query_demo.json \
--output_answer_file $OUTPUT_DIR \
--rapidapi_key $RAPIDAPI_KEY \
--use_rapidapi_key

22 changes: 14 additions & 8 deletions toolbench/inference/Downstream_tasks/rapidapi.py
@@ -54,6 +54,8 @@ def __init__(self, query_json, tool_descriptions, retriever, args, process_id=0)

self.tool_root_dir = args.tool_root_dir
self.toolbench_key = args.toolbench_key
self.rapidapi_key = args.rapidapi_key
self.use_rapidapi_key = args.use_rapidapi_key
self.service_url = "http://8.218.239.54:8080/rapidapi"
self.max_observation_length = args.max_observation_length
self.observ_compress_method = args.observ_compress_method
@@ -328,14 +330,18 @@ def _step(self, action_name="", action_input=""):
}
if self.process_id == 0:
print(colored(f"query to {self.cate_names[k]}-->{self.tool_names[k]}-->{action_name}",color="yellow"))
if self.use_rapidapi_key:
    payload["rapidapi_key"] = self.rapidapi_key
    response = get_rapidapi_response(payload)
else:
    response = requests.post(self.service_url, json=payload, timeout=30)
    if response.status_code != 200:
        return json.dumps({"error": f"request invalid, data error. status_code={response.status_code}", "response": ""}), 12
    try:
        response = response.json()
    except:
        print(response)
        return json.dumps({"error": "request invalid, data error", "response": ""}), 12
# 1 Hallucinating function names
# 4 means that the model decides to pruning by itself
# 5 represents api call timeout
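When `--use_rapidapi_key` is set, the payload carries your personal key and the request is routed through `get_rapidapi_response` instead of the ToolBench server. RapidAPI itself authenticates direct calls with `X-RapidAPI-Key`/`X-RapidAPI-Host` headers; below is a hedged sketch of how such headers are typically built (the helper name and host value are hypothetical, not the repo's actual implementation):

```python
def build_rapidapi_headers(rapidapi_key: str, api_host: str) -> dict:
    """Build the standard RapidAPI auth headers for a direct API call."""
    return {
        "X-RapidAPI-Key": rapidapi_key,   # your personal RapidAPI key
        "X-RapidAPI-Host": api_host,      # e.g. "some-api.p.rapidapi.com" (hypothetical)
    }

# Example: headers for a hypothetical API hosted on RapidAPI
headers = build_rapidapi_headers("your_rapidapi_key", "example-api.p.rapidapi.com")
```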
2 changes: 2 additions & 0 deletions toolbench/inference/qa_pipeline.py
@@ -21,6 +21,8 @@
parser.add_argument('--input_query_file', type=str, default="", required=False, help='input path')
parser.add_argument('--output_answer_file', type=str, default="",required=False, help='output path')
parser.add_argument('--toolbench_key', type=str, default="",required=False, help='your toolbench key to request rapidapi service')
parser.add_argument('--rapidapi_key', type=str, default="",required=False, help='your rapidapi key to request rapidapi service')
parser.add_argument('--use_rapidapi_key', action="store_true", help="To use customized rapidapi service or not.")

args = parser.parse_args()

2 changes: 2 additions & 0 deletions toolbench/inference/qa_pipeline_open_domain.py
@@ -23,6 +23,8 @@
parser.add_argument('--input_query_file', type=str, default="", required=False, help='input path')
parser.add_argument('--output_answer_file', type=str, default="",required=False, help='output path')
parser.add_argument('--toolbench_key', type=str, default="",required=False, help='your toolbench key to request rapidapi service')
parser.add_argument('--rapidapi_key', type=str, default="",required=False, help='your rapidapi key to request rapidapi service')
parser.add_argument('--use_rapidapi_key', action="store_true", help="To use customized rapidapi service or not.")

args = parser.parse_args()

2 changes: 2 additions & 0 deletions toolbench/inference/toolbench_server.py
@@ -88,6 +88,8 @@ def get_args(self):
parser.add_argument('--input_query_file', type=str, default="", required=False, help='input path')
parser.add_argument('--output_answer_file', type=str, default="", required=False, help='output path')
parser.add_argument('--toolbench_key', type=str, default="", required=False, help='your toolbench key')
parser.add_argument('--rapidapi_key', type=str, default="",required=False, help='your rapidapi key to request rapidapi service')
parser.add_argument('--use_rapidapi_key', action="store_true", help="To use customized rapidapi service or not.")

args = parser.parse_args()
return args
