Commit

Added v 1.5
haseeb-heaven committed Oct 12, 2023
1 parent 50a061b commit 01f6d20
Showing 13 changed files with 172 additions and 103 deletions.
3 changes: 3 additions & 0 deletions .env
@@ -0,0 +1,3 @@
HUGGINGFACE_API_KEY="Your API Key here"
PALM_API_KEY="Your API Key here"
OPENAI_API_KEY="Your API Key here"
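The new `.env` file above stores keys in plain `KEY="value"` dotenv format. As a minimal sketch of how such a file can be read (an assumption for illustration only — the project most likely uses a dotenv library, and `load_env` is a hypothetical helper):

```python
import os

def load_env(text):
    """Minimal .env parser: handles KEY=value and KEY="value" lines."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip().strip('"')

load_env('OPENAI_API_KEY="Your API Key here"')
print(os.environ["OPENAI_API_KEY"])  # → Your API Key here
```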
5 changes: 1 addition & 4 deletions .gitignore
@@ -6,7 +6,4 @@
*.log
*.json
history/*
output/*
.*
.env
.config
output/*
50 changes: 39 additions & 11 deletions README.md
@@ -18,11 +18,11 @@ Designed with versatility in mind, **Open-Code-Interpreter** works seamlessly on

## **Why is this Interpreter Unique?**

The distinguishing feature of this interpreter, as compared to others, is its **commitment to remain free 🆓**. It does not require any model downloads or follow to **tedious processes** or methods for execution. It is designed to be **simple** and **free** for all users and works on all major OS **_Windows,Linux,MacOS_**
The distinguishing feature of this interpreter, compared to others, is its **commitment to remain free 🆓**. It does not require downloading any models or following **tedious processes** for execution. It is designed to be **simple** and **free** for all users and works on all major operating systems: **_Windows, Linux, macOS_**

## **Future Plans:**
- ~~🎯 We plan to provide **GPT 3.5** models for free.~~ 🎯 We have added support for **GPT 3.5** models using **Heaven-GPT**.
- 🌐 .~~We plan to provide **Vertex AI (PALM 2)** models for free..~~ We have added support for **PALM-2** model using [**LiteLLM**](https://litellm.ai/)
- ~~🎯 We plan to integrate **GPT 3.5** models.~~ *🎯 We have added support for **GPT 3.5** models*.
- 🌐 ~~We plan to provide **Vertex AI (PALM 2)** models.~~ We have added support for the **PALM-2** model using [**LiteLLM**](https://litellm.ai/)
- 🔗 ~~We plan to provide API Base change using [**LiteLLM**](https://litellm.ai/).~~ Added support for [**LiteLLM**](https://litellm.ai/)
- 🤖 More **Hugging Face** models with free-tier.
- 💻 Support for more **Operating Systems**.
@@ -53,7 +53,7 @@ cd open-code-interpreter
```bash
pip install -r requirements.txt
```
3. Setup the Keys required.(**_Not needed for GPT 3.5_**)
3. Set up the required keys.

## HUGGING FACE API Key setup.

@@ -79,6 +79,23 @@ echo "HUGGINGFACE_API_KEY=Your Access Token" >> .env
echo "PALM_API_KEY=Your API Key" >> .env
```

## OpenAI API Key setup.

*Step 1:* **Obtain the OpenAI API key.**

*Step 2:* Visit the following URL: *https://platform.openai.com/account/api-keys*

*Step 3:* Sign up for an account or log in if you already have one.

*Step 4:* Navigate to the API section in your account dashboard.

*Step 5:* Click on the **Create New Key** button.

*Step 6:* The generated key is your API key. Please make sure to **copy** it and **paste** it in the required field below.
```bash
echo "OPENAI_API_KEY=Your API Key" >> .env
```
(Using `>>` appends to `.env`, so any keys saved in earlier steps are kept; `>` would overwrite the file.)

4. Run the interpreter with Python:</br>
### Running with Python.
```bash
@@ -162,7 +179,7 @@ python interpreter.py -m 'code-llama' -md 'code' -s -l 'javascript'
Code Interpreter Demo
[![code_interpreter_demo](https://img.youtube.com/vi/GGLNBfbN0oY/0.jpg)](https://youtube.com/shorts/GGLNBfbN0oY)

Example of GPT 3.5 Turbo based on **Heaven-GPT**.
Example of GPT 3.5 Turbo.
![chatgpt_command](https://github.com/haseeb-heaven/open-code-interpreter/blob/main/resources/chat-gpt-command.png?raw=true "GPT 3.5 Turbo Code")</br>

Example of PALM-2 based on **Google Vertex AI**.
@@ -181,7 +198,19 @@ Example of Mistral with code mode:
## ⚙️ **Settings**
You can customize the settings of the current model from the `.config` file. It contains all the necessary parameters such as `temperature`, `max_tokens`, and more.

If you want to add a new model from Hugging Face, follow these steps:
### **Steps to add your own custom API Server**
To integrate your own API server for OpenAI instead of the default server, follow these steps:
1. Navigate to the `Configs` directory.
2. Open the configuration file for the model you want to modify. This could be either `gpt-3.5-turbo.config` or `gpt-4.config`.
3. Add the following line at the end of the file:
```
api_base = https://my-custom-base.com
```
Replace `https://my-custom-base.com` with the URL of your custom API server.
4. Save and close the file.
Now, whenever you select the `gpt-3.5-turbo` or `gpt-4` model, the system will automatically use your custom server.
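As a hedged sketch of how the `api_base` value might reach LiteLLM (the actual wiring inside the interpreter may differ; `build_completion_kwargs` is a hypothetical helper shown only to illustrate the idea):

```python
def build_completion_kwargs(model, prompt, api_base=None):
    """Assemble keyword arguments for a litellm.completion() call."""
    kwargs = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if api_base:
        # LiteLLM routes OpenAI-compatible requests to this base URL
        kwargs["api_base"] = api_base
    return kwargs

# With api_base set in gpt-3.5-turbo.config, requests target the custom server:
kwargs = build_completion_kwargs("gpt-3.5-turbo", "print hello world",
                                 api_base="https://my-custom-base.com")
```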

### **Steps to add a new Hugging Face model**

1. 📋 Copy the `.config` file and rename it to `configs/hf-model-new.config`.
2. 🛠️ Modify the parameters of the model like `start_sep`, `end_sep`, `skip_first_line`.
@@ -230,20 +259,19 @@ If you're interested in contributing to **Open-Code-Interpreter**, we'd love to
**v1.2** - Added LiteLLM Support.</br>
**v1.3** - Added Gpt 3.5 Support.</br>
**v1.4** - Added PALM 2 Support.</br>
**v1.5** - Added official support for GPT 3.5/4 models.</br>

## 📜 **License**

This project is licensed under the **MIT License**. For more details, please refer to the LICENSE file.

Please note the following additional licensing details:

- The **GPT 3.5 model** is provided by **Heaven-GPT** and is subject to its own permissive license. It is intended for use exclusively with this **code-interpreter**. More information can be found [here](https://heaven-gpt.haseebmir.repl.co/privacy).

- The **GPT Models** are sourced from the [gpt4free](https://github.com/xtekky/gpt4free) repository, which does not come with a license. Please ensure you agree to their terms as outlined in their [legal notice](https://github.com/xtekky/gpt4free/blob/main/LEGAL_NOTICE.md).
- The **GPT 3.5/4 models** are provided by **OpenAI** and are governed by their own licensing terms. Please ensure you have read and agreed to their terms before using these models. More information can be found at [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use).

- The **PALM models** are officially supported by the **Google PALM 2 API** and come with their own license and support. Please ensure you understand and agree to their terms before using More information can be found [here](https://developers.generativeai.google/terms).
- The **PALM models** are officially supported by the **Google PALM 2 API**. These models have their own licensing terms and support. Please ensure you have read and agreed to their terms before using these models. More information can be found at [Google Generative AI's Terms of Service](https://developers.generativeai.google/terms).

- The **Hugging Face models** are provided by **Hugging Face Inc.** and are subject to their own license. Please ensure you understand and agree to their terms before using. More information can be found [here](https://huggingface.co/terms-of-service).
- The **Hugging Face models** are provided by **Hugging Face Inc.** and are governed by their own licensing terms. Please ensure you have read and agreed to their terms before using these models. More information can be found at [Hugging Face's Terms of Service](https://huggingface.co/terms-of-service).

## 🙏 **Acknowledgments**

24 changes: 12 additions & 12 deletions configs/code-llama-phind.config
@@ -1,18 +1,18 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 1024
# The maximum number of new tokens that the model can generate.
max_tokens = 1024

# The start separator for the generated code.
start_sep = ```
# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```
# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = True
# If True, the first line of the generated text will be skipped.
skip_first_line = True

# The model used for generating the code.
HF_MODEL = Phind/Phind-CodeLlama-34B-v2
# The model used for generating the code.
HF_MODEL = Phind/Phind-CodeLlama-34B-v2

27 changes: 15 additions & 12 deletions configs/gpt-3.5-turbo.config
@@ -1,17 +1,20 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 2048
# The maximum number of new tokens that the model can generate.
max_tokens = 2048

# The start separator for the generated code.
start_sep = ```
# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```
# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = True
# If True, the first line of the generated text will be skipped.
skip_first_line = True

# The model used for generating the code.
HF_MODEL = gpt-3.5-turbo
# The model used for generating the code.
HF_MODEL = gpt-3.5-turbo

# API Base is your own base for OpenAI.
api_base = https://heaven-gpt.haseebmir.repl.co
17 changes: 17 additions & 0 deletions configs/gpt-4.config
@@ -0,0 +1,17 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 2048

# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = True

# The model used for generating the code.
HF_MODEL = gpt-4
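The `.config` files above are flat `key = value` text with `#` comments and no `[section]` headers. A hedged sketch of how such a file can be read with Python's standard library (the project's actual config loader may work differently):

```python
import configparser

def load_model_config(text):
    """Parse a flat `key = value` config with # comment lines."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # preserve key case (HF_MODEL, not hf_model)
    # The files have no [section] header, so prepend a dummy one
    parser.read_string("[DEFAULT]\n" + text)
    return dict(parser["DEFAULT"])

sample = """\
# The temperature parameter controls the randomness of the model's output.
temperature = 0.1
max_tokens = 2048
HF_MODEL = gpt-4
"""
config = load_model_config(sample)
print(config["HF_MODEL"])  # → gpt-4
```

Note that `configparser` returns every value as a string, so numeric settings like `temperature` still need an explicit `float()`/`int()` conversion before use.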
8 changes: 4 additions & 4 deletions configs/mistral-7b.config
@@ -1,8 +1,8 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 1024
# The maximum number of new tokens that the model can generate.
max_tokens = 1024

# The start separator for the generated code.
start_sep = ```
24 changes: 12 additions & 12 deletions configs/palm-2.config
@@ -1,17 +1,17 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 2048
# The maximum number of new tokens that the model can generate.
max_tokens = 2048

# The start separator for the generated code.
start_sep = ```
# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```
# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = True
# If True, the first line of the generated text will be skipped.
skip_first_line = True

# The model used for generating the code.
HF_MODEL = palm/chat-bison
# The model used for generating the code.
HF_MODEL = palm/chat-bison
24 changes: 12 additions & 12 deletions configs/star-chat.config
@@ -1,17 +1,17 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 1024
# The maximum number of new tokens that the model can generate.
max_tokens = 1024

# The start separator for the generated code.
start_sep = <|assistant|>
# The start separator for the generated code.
start_sep = <|assistant|>

# The end separator for the generated code.
end_sep = <|end|>
# The end separator for the generated code.
end_sep = <|end|>

# If True, the first line of the generated text will be skipped.
skip_first_line = False
# If True, the first line of the generated text will be skipped.
skip_first_line = False

# The model used for generating the code.
HF_MODEL = HuggingFaceH4/starchat-beta
# The model used for generating the code.
HF_MODEL = HuggingFaceH4/starchat-beta
20 changes: 19 additions & 1 deletion interpreter
@@ -1,4 +1,22 @@
#!/usr/bin/env python3

"""
This is the main file for the Open-Code-Interpreter.
It handles command line arguments and initializes the Interpreter.
Command line arguments:
--exec, -e: Executes the code generated from the user's instructions.
--save_code, -s: Saves the generated code.
--mode, -md: Selects the mode of operation. Choices are 'code', 'script', and 'command'.
--model, -m: Sets the model for code generation. Default is 'code-llama'.
--version, -v: Displays the version of the program.
--lang, -l: Sets the interpreter language. Default is 'python'.
--display_code, -dc: Displays the generated code in the output.
Author: HeavenHM
Date: 2023/10/12
"""

from libs.interpreter_lib import Interpreter
import argparse
import sys
@@ -10,7 +28,7 @@ def main():
parser.add_argument('--save_code', '-s', action='store_true', help='Save the generated code')
parser.add_argument('--mode', '-md', choices=['code', 'script', 'command'], help='Select the mode (`code` for generating code, `script` for generating shell scripts, `command` for generating single line commands)')
parser.add_argument('--model', '-m', type=str, default='code-llama', help='Set the model for code generation. (Defaults to gpt-3.5-turbo)')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.4')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.5')
parser.add_argument('--lang', '-l', type=str, default='python', help='Set the interpreter language. (Defaults to Python)')
parser.add_argument('--display_code', '-dc', action='store_true', help='Display the code in output')
args = parser.parse_args()
2 changes: 1 addition & 1 deletion interpreter.py
100644 → 100755
Expand Up @@ -26,7 +26,7 @@ def main():
parser.add_argument('--save_code', '-s', action='store_true', help='Save the generated code')
parser.add_argument('--mode', '-md', choices=['code', 'script', 'command'], help='Select the mode (`code` for generating code, `script` for generating shell scripts, `command` for generating single line commands)')
parser.add_argument('--model', '-m', type=str, default='code-llama', help='Set the model for code generation. (Defaults to gpt-3.5-turbo)')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.4')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.5')
parser.add_argument('--lang', '-l', type=str, default='python', help='Set the interpreter language. (Defaults to Python)')
parser.add_argument('--display_code', '-dc', action='store_true', help='Display the code in output')
args = parser.parse_args()
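The command-line flags documented in the `interpreter` diff above can be reproduced in a few lines of `argparse`. This is an illustrative sketch only (the real parser lives in `interpreter.py`), using the flag names and defaults shown in the docstring:

```python
import argparse

# Rebuild the documented CLI surface (sketch, not the real entry point)
parser = argparse.ArgumentParser(prog="interpreter")
parser.add_argument("--exec", "-e", action="store_true",
                    help="Execute the generated code")
parser.add_argument("--save_code", "-s", action="store_true",
                    help="Save the generated code")
parser.add_argument("--mode", "-md", choices=["code", "script", "command"],
                    help="Select the mode of operation")
parser.add_argument("--model", "-m", type=str, default="code-llama",
                    help="Set the model for code generation")
parser.add_argument("--lang", "-l", type=str, default="python",
                    help="Set the interpreter language")
parser.add_argument("--display_code", "-dc", action="store_true",
                    help="Display the generated code in the output")

# The Code-LLaMA JavaScript invocation from the README:
args = parser.parse_args(["-m", "code-llama", "-md", "code", "-s", "-l", "javascript"])
```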