Commit

Add remaining files (#43)
heyodai authored Nov 9, 2023
1 parent be711c1 commit 0a3af1d
Showing 9 changed files with 1,080 additions and 33 deletions.
Binary file added .DS_Store
136 changes: 112 additions & 24 deletions README.md
@@ -1,17 +1,35 @@
# Magic Commit! ✨
# Magic Commit! ✨ 🍰

`magic-commit` is a command-line tool for writing your commit messages. It pings OpenAI's GPT-3 API to generate commit messages based on your commit history.
<img src="splash-image.png" alt="Magic Commit" width="400"/>

`magic-commit` writes your commit messages with AI.

It's currently available as a command-line tool. There is also an experimental VSCode extension in alpha; you can read about it under `Experiments > VSCode Extension` below.

## Table of Contents

- [Installation](#installation)
- [Setup](#setup)
- [Usage](#usage)
- [Experiments](#experiments)
- [VSCode Extension](#vscode-extension)
- [Llama2 Model](#llama2-model)
- [Developer Notes](#developer-notes)
- [Building the Command-Line Tool](#building-the-command-line-tool)
- [Building the VSCode Extension](#building-the-vscode-extension)
- [Publishing to PyPI](#publishing-to-pypi)
- [Unit Tests](#unit-tests)

## Installation

All platforms via [PyPI](https://pypi.org/project/magic-commit/)
The easiest way to install the command-line tool is via [PyPI](https://pypi.org/project/magic-commit/):
```bash
pip install magic-commit
```

## Setup

You'll need to set up an OpenAI account and get an API key. You can do that on [OpenAI's website](https://platform.openai.com/account/api-keys).
You'll need to set up an OpenAI account and [get an API key](https://platform.openai.com/account/api-keys).

Once you have a key, add it to `magic-commit` like so:
```bash
@@ -22,44 +40,114 @@ magic-commit -k <your-key-here>

Running `magic-commit` is straightforward:
```bash
magic-commit # will run in your current directory
>>> [your commit message] # automatically copied to your clipboard
>>> magic-commit # will run in your current directory
[your commit message] # automatically copied to your clipboard
```

To specify a directory:
To see all the options, run:
```bash
magic-commit -d <path-to-git-repo>
>>> magic-commit --help

usage: magic-commit [-h] [-d DIRECTORY] [-m MODEL] [-k API_KEY] [--set-model MODEL_NAME] [--no-copy] [--no-load] [-t TICKET] [-s START] [--llama LLAMA]

Generate commit messages with OpenAI’s GPT.

optional arguments:
-h, --help show this help message and exit
-d DIRECTORY, --directory DIRECTORY
Specify the git repository directory
-m MODEL, --model MODEL
Specify the OpenAI GPT model
-k API_KEY, --key API_KEY
Set your OpenAI API key
--set-model MODEL_NAME
Set the default OpenAI GPT model
--no-copy Do not copy the commit message to the clipboard
--no-load Do not show loading message
-t TICKET, --ticket TICKET
Request that the provided GitHub issue be linked in the commit message
-s START, --start START
Provide the start of the commit message
--llama LLAMA Pass a localhost Llama2 server as a replacement for OpenAI API
```
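The help text above maps onto an `argparse` setup like the one in `cli/magic_commit/__main__.py`. A trimmed, runnable sketch (only a subset of the real flags is reproduced here):

```python
import argparse

# A reduced version of the magic-commit argument parser, for illustration.
parser = argparse.ArgumentParser(
    prog="magic-commit",
    description="Generate commit messages with OpenAI's GPT.",
)
parser.add_argument("-d", "--directory", help="Specify the git repository directory")
parser.add_argument("-m", "--model", help="Specify the OpenAI GPT model")
parser.add_argument(
    "--no-copy",
    action="store_true",
    help="Do not copy the commit message to the clipboard",
)
parser.add_argument(
    "--llama",
    help="Pass a localhost Llama2 server as a replacement for OpenAI API",
)

# Parse a sample command line instead of sys.argv, so this runs anywhere
args = parser.parse_args(["-m", "gpt-4", "--no-copy"])
print(args.model, args.no_copy)  # → gpt-4 True
```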

For help:
```bash
magic-commit -h # or --help
```
For models, note that:
- You need to specify an [OpenAI GPT model](https://platform.openai.com/docs/models).
- e.g. `gpt-3.5-turbo-0301`, or `gpt-4`
- There is an experimental mode which uses Meta's Llama2 models instead.
- (see `Experiments > Llama2 Model` below)
- Your OpenAI account needs to have access to the model you specify.
- i.e., don't specify `gpt-4` if your account doesn't have access to it.

## Specifying a model
## Experiments

To change the model for this run:
### VSCode Extension

Currently in alpha status (v0.0.3). It works end to end, but we still need to address the following:

- [ ] Write automated tests
- [ ] Fix any [known bugs](https://github.com/heyodai/magic-commit/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
- [ ] Write documentation
- [ ] Officially publish to the VSCode Marketplace

### Llama2 Model

Llama2 is a free alternative to OpenAI's `GPT-3.5`, created by Meta (Facebook). A long-term goal of `magic-commit` is to support Llama2 fully, allowing you to use it without needing to pay OpenAI or send any potentially sensitive data to them.

To that end, you can pass a running `localhost` Llama2 server to `magic-commit` like so:
```bash
magic-commit -m <model-name>
magic-commit --llama http://localhost:8080 # or whatever port you're using
```

To change the model globally:
Note that you'll need a running Llama2 server. If you're on macOS, I found [these instructions](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) from the `llama-cpp-python` project fairly easy to follow.
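Under the hood, `magic-commit` POSTs a chat-style JSON payload to that server (see `call_llama2_server` in this commit's diff). A minimal sketch of the payload shape, with placeholder message text:

```python
import json

def build_llama2_payload(system_msg: str, user_msg: str) -> dict:
    # Mirrors the "messages" structure magic-commit sends to the server
    return {
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ]
    }

payload = build_llama2_payload("You write git commit messages.", "<staged diff here>")
print(json.dumps(payload, indent=2))
```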

The end goal is to support both OpenAI and Llama2 models seamlessly, and to let you switch between them with a simple flag.

### LoRA Fine-Tuned Model

Llama2 models capable of running on a normal computer have to be fairly small, e.g. 7 billion parameters. This is a lot, but it's a far cry from the 175 billion parameters of OpenAI's `GPT-3.5` model. Performance for this task out-of-the-box is not great.

However, there is hope. Low-Rank Adaptation (LoRA) is a technique for cheaply fine-tuning a large model by training only a small set of low-rank update matrices. To quote the [research paper](https://arxiv.org/abs/2106.09685):

> Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3.

I believe we can potentially reach GPT-3.5-level quality while running on a laptop. You can see my experiments with this in the `lora-experiments` folder. If you have any ideas or suggestions, please reach out!
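To make the parameter savings concrete, here is a back-of-the-envelope sketch (the 4096×4096 weight shape and rank r=8 are illustrative, not taken from any specific Llama2 layer):

```python
def lora_param_counts(d: int, k: int, r: int) -> tuple:
    """Trainable parameters: full fine-tune of a d×k weight matrix
    vs. LoRA's low-rank factors B (d×r) and A (r×k)."""
    full = d * k
    lora = r * (d + k)
    return full, lora

full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, full // lora)  # → 16777216 65536 256
```

For this hypothetical layer, LoRA trains roughly 256× fewer parameters than a full fine-tune, which is what makes laptop-scale experiments plausible.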

## Developer Notes

Please feel free to open a GitHub issue, submit a pull request, or to reach out if you have any questions or suggestions!

### Building the Command-Line Tool

Note: this refers to a local development build. For production releases, see `Publishing to PyPI` below.

```bash
magic-commit --set_model <model-name>
cd cli/magic_commit
pip install -e . # install the package in editable mode
```

For models, note that:
- You need to specify an [OpenAI GPT model](https://platform.openai.com/docs/models).
- e.g. `gpt-3.5-turbo-0301`, or `gpt-4`
- Your account needs to have access to the model you specify.
### Building the VSCode Extension

## Contributing
```bash
cd vscode/magic-commit
npm install -g vsce # if you don't have it already
vsce package # creates a .vsix file
```

Feel free to open an issue or submit a pull request.
### Publishing to PyPI

To publish a new version to PyPI:
```bash
pip install twine wheel # if you don't have it already
cd cli/magic_commit
pip install twine wheel
python setup.py sdist bdist_wheel # build the package
twine upload dist/* # upload to PyPI
```

### Unit Tests

To run the unit tests:
```bash
cd cli/magic_commit/tests
pytest
```
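A hypothetical sketch of what one of these unit tests might look like. The stand-in `is_git_repository` below only checks for a `.git` directory; the real implementation in `magic_commit.py` may differ:

```python
import os
import tempfile

def is_git_repository(directory: str) -> bool:
    # Stand-in for the real function: treat a directory as a repo if it has .git/
    return os.path.isdir(os.path.join(directory, ".git"))

def test_is_git_repository():
    with tempfile.TemporaryDirectory() as d:
        assert not is_git_repository(d)       # fresh temp dir: not a repo
        os.makedirs(os.path.join(d, ".git"))  # fake an initialized repo
        assert is_git_repository(d)

test_is_git_repository()
print("ok")
```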
8 changes: 5 additions & 3 deletions cli/magic_commit/__main__.py
@@ -40,16 +40,17 @@ def main() -> None:
action="store_true",
help="Do not copy the commit message to the clipboard",
)
parser.add_argument(
"--no-load", action="store_true", help="Do not show loading message"
)
parser.add_argument(
"-t",
"--ticket",
help="Request that the provided GitHub issue be linked in the commit message",
type=int,
)
parser.add_argument("-s", "--start", help="Provide the start of the commit message")
parser.add_argument(
"--no-load", action="store_true", help="Do not show loading message"
)
parser.add_argument("--llama", help="Pass a localhost Llama2 server as a replacement for OpenAI API")

# Decide what to do based on the arguments
args = parser.parse_args()
@@ -86,6 +87,7 @@ def main() -> None:
ticket=ticket,
start=start,
show_loading_message=show_loading_message,
llama2_url=args.llama,
)

print(results)
60 changes: 55 additions & 5 deletions cli/magic_commit/magic_commit.py
@@ -5,6 +5,7 @@
import itertools
import time
import sys
import requests

import openai
from jinja2 import Environment, PackageLoader
@@ -21,9 +22,14 @@ class GitRepositoryError(Exception):
class OpenAIKeyError(Exception):
"""Custom exception for OpenAI API key errors."""



class Llama2ServerError(Exception):
"""Custom exception for Llama2 server errors."""
pass



def is_git_repository(directory: str) -> bool:
"""
Check if a directory is a Git repository.
@@ -135,7 +141,7 @@ def get_commit_messages(directory: str) -> str:


def generate_commit_message(
diff: str, start: str, ticket: str, api_key: str, model: str
diff: str, start: str, ticket: str, api_key: str, model: str, llama2_url: str = None
) -> str:
"""
Generate a commit message.
@@ -152,6 +158,8 @@
The OpenAI API key.
model : str
The OpenAI GPT model to use.
llama2_url: str
(Optional) The URL of the Llama2 server to use.
Returns
-------
@@ -176,9 +184,18 @@
{"role": "system", "content": system_msg},
{"role": "user", "content": user_msg},
]
openai.api_key = api_key
response = openai.ChatCompletion.create(model=model, messages=messages)
response = response.choices[0].message.content.strip()

# Determine which service to use
if llama2_url:
# Call the Llama2 server
response = call_llama2_server(llama2_url, messages)
print(response)
response = response['choices'][0]['message']['content'].strip()
else:
# Use OpenAI's service
openai.api_key = api_key
response = openai.ChatCompletion.create(model=model, messages=messages)
response = response.choices[0].message.content.strip()

# Strip the first line of response
# Assign it to start if it is empty
@@ -193,6 +210,36 @@
return render_final_template(start, response, ticket).strip()



def call_llama2_server(url: str, messages: list) -> dict:
"""
Call the Llama2 server.
Parameters
----------
url : str
The URL of the Llama2 server.
messages : list
The messages to send to the server.
Returns
-------
dict
The response from the server.
Raises
------
Llama2ServerError
If an error occurs while connecting to the server.
"""
try:
response = requests.post(url, json={"messages": messages})
response.raise_for_status()
return response.json()
except requests.exceptions.RequestException as e:
raise Llama2ServerError(f"An error occurred while connecting to the Llama2 server: {e}")


def render_template(message: str, template_name: str) -> str:
"""
Render the commit message template.
@@ -275,6 +322,7 @@ def run_magic_commit(
api_key: str,
model: str,
show_loading_message: bool,
llama2_url: str = None
) -> str:
"""
Generate a commit message and return it.
@@ -293,6 +341,8 @@
The OpenAI GPT model to use.
show_loading_message : bool
Whether or not to show the loading animation.
llama2_url: str
(Optional) The URL of the Llama2 server to use.
Returns
-------
@@ -309,7 +359,7 @@
diff = run_git_diff(directory)
if not check_git_status(directory): # Check if there are staged changes
return "⛔ Warning: No staged changes detected. Please stage some changes before running magic-commit."
commit_message = generate_commit_message(diff, start, ticket, api_key, model)
commit_message = generate_commit_message(diff, start, ticket, api_key, model, llama2_url)
finally:
# Ensure the loading animation stops
if show_loading_message:
2 changes: 1 addition & 1 deletion cli/setup.py
@@ -6,7 +6,7 @@

setup(
name="magic-commit",
version="0.5.1",
version="0.6.1",
packages=find_packages(),
include_package_data=True, # This line is needed to include non-code files
package_data={
