From 9d9fc635bb1ccfc4185fd3caea68594758b7aca3 Mon Sep 17 00:00:00 2001
From: avie66
Date: Wed, 30 Oct 2024 12:34:23 +0530
Subject: [PATCH 1/4] Update README.md
Added more description
---
README.md | 142 ++++++++++++++++++++++++++++++++++++------------------
1 file changed, 95 insertions(+), 47 deletions(-)
diff --git a/README.md b/README.md
index ce9cedbd..fbdb0a05 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
-This repo consists Refact WebUI for fine-tuning and self-hosting of code models, that you can later use inside Refact plugins for code completion and chat.
+This repository contains the Refact WebUI, designed for fine-tuning and self-hosting of code models. You can seamlessly integrate these models into Refact plugins for enhanced code completion and chat capabilities.
---
@@ -15,17 +15,51 @@ This repo consists Refact WebUI for fine-tuning and self-hosting of code models,
[![Visual Studio](https://img.shields.io/visual-studio-marketplace/d/smallcloud.codify?label=VS%20Code)](https://marketplace.visualstudio.com/items?itemName=smallcloud.codify)
[![JetBrains](https://img.shields.io/jetbrains/plugin/d/com.smallcloud.codify?label=JetBrains)](https://plugins.jetbrains.com/plugin/20647-codify)
-- [x] Fine-tuning of open-source code models
-- [x] Self-hosting of open-source code models
-- [x] Download and upload Lloras
-- [x] Use models for code completion and chat inside Refact plugins
-- [x] Model sharding
-- [x] Host several small models on one GPU
-- [x] Use OpenAI and Anthropic keys to connect GPT-models for chat
+## Key Features 🌟
-![self-hosting-refact](https://github.com/smallcloudai/refact/assets/5008686/18e48b42-b638-4606-bde0-cadd47fd26e7)
+- ✅ Fine-tuning of open-source code models
+- ✅ Self-hosting of open-source code models
+- ✅ Download and upload LoRAs
+- ✅ Use models for code completion and chat inside Refact plugins
+- ✅ Model sharding
+- ✅ Host several small models on one GPU
+- ✅ Use OpenAI and Anthropic keys to connect GPT models for chat
+
+---
+
+# Demo Video 🎥
+
+https://github.com/user-attachments/assets/e69ee31d-6308-4050-9ee9-9de1b2af040e
+
+---
+
+# Table of Contents 📚
+
+- [Custom Installation](#custom-installation-%EF%B8%8F)
+ - [Running Refact Self-Hosted in a Docker Container](#running-refact-self-hosted-in-a-docker-container-)
+- [Getting Started with Plugins](#getting-started-with-plugins-)
+- [Progress/Future Plans](#progress-and-future-plans-)
+- [Supported Models](#supported-models-)
+- [Architecture](#architecture-%EF%B8%8F)
+- [Contributing](#contributing-)
+- [Follow Us/FAQ](#follow-us-and-faq-)
+- [License](#license-)
+
+
+# Custom Installation ⚙️
+
+You can also install the Refact repo without Docker:
+```shell
+pip install .
+```
+If you have a GPU with CUDA capability >= 8.0, you can also install it with flash-attention v2 support:
+```shell
+FLASH_ATTENTION_FORCE_BUILD=TRUE MAX_JOBS=4 INSTALL_OPTIONAL=TRUE pip install .
+```
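+Before forcing the flash-attention v2 build, it can help to confirm the GPU actually meets the CUDA capability >= 8.0 requirement. A minimal sketch, assuming PyTorch is installed; the helper name `supports_flash_attention_v2` is ours, not part of Refact:
+
+```python
+# Hypothetical helper: decide whether the flash-attention v2 build is worth
+# enabling, based on the CUDA compute capability reported by PyTorch.
+def supports_flash_attention_v2(major: int, minor: int) -> bool:
+    # flash-attention v2 requires compute capability >= 8.0 (Ampere or newer).
+    return (major, minor) >= (8, 0)
+
+if __name__ == "__main__":
+    try:
+        import torch
+        if torch.cuda.is_available():
+            major, minor = torch.cuda.get_device_capability(0)
+            print(supports_flash_attention_v2(major, minor))
+        else:
+            print("No CUDA device available")
+    except ImportError:
+        print("PyTorch is not installed")
+```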
+
+
+## Running Refact Self-Hosted in a Docker Container 🐳
-### Running Refact Self-Hosted in a Docker Container
The easiest way to run the self-hosted server is to use a pre-built Docker image.
@@ -82,11 +116,12 @@ docker volume rm VVV
```
-See [CONTRIBUTING.md](CONTRIBUTING.md) for installation without a docker container.
+See [CONTRIBUTING.md](CONTRIBUTING.md) for installation without a docker container.
+---
-### Setting Up Plugins
+# Getting Started with Plugins 🔌
Download Refact for [VS Code](https://marketplace.visualstudio.com/items?itemName=smallcloud.codify) or [JetBrains](https://plugins.jetbrains.com/plugin/20647-refact-ai).
@@ -100,50 +135,63 @@ Settings > Tools > Refact.ai > Advanced > Inference URL
Extensions > Refact.ai Assistant > Settings > Infurl
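+After pointing the plugin at the custom inference URL, you can sanity-check that the address is reachable. A minimal sketch, assuming the server listens on the default `http://127.0.0.1:8008` from the instructions above; `server_reachable` is an illustrative helper, not part of Refact:
+
+```python
+# Quick reachability check for the self-hosted server address.
+# Adjust the URL if you mapped a different host or port.
+from urllib.error import HTTPError, URLError
+from urllib.request import urlopen
+
+def server_reachable(url: str, timeout: float = 3.0) -> bool:
+    """Return True if anything answers HTTP at `url`, even with an error status."""
+    try:
+        urlopen(url, timeout=timeout)
+        return True
+    except HTTPError:
+        # The server answered with an HTTP error status: still reachable.
+        return True
+    except (URLError, OSError):
+        return False
+
+if __name__ == "__main__":
+    print(server_reachable("http://127.0.0.1:8008"))
+```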
+---
+
+# Progress and Future Plans 🚧
-## Supported models
+*Details about progress and future plans will be added here.*
+
+---
+
+## Supported Models 📊
| Model | Completion | Chat | Fine-tuning | [Deprecated](## "Will be removed in next versions") |
|---------------------------------------------------------------------------------------------------|------------|------|-------------|-----------------------------------------------------|
-| [Refact/1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim) | + | | + | |
-| [starcoder2/3b/base](https://huggingface.co/bigcode/starcoder2-3b) | + | | + | |
-| [starcoder2/7b/base](https://huggingface.co/bigcode/starcoder2-7b) | + | | + | |
-| [starcoder2/15b/base](https://huggingface.co/bigcode/starcoder2-15b) | + | | + | |
-| [deepseek-coder/1.3b/base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | + | | + | |
-| [deepseek-coder/5.7b/mqa-base](https://huggingface.co/deepseek-ai/deepseek-coder-5.7bmqa-base) | + | | + | |
-| [magicoder/6.7b](https://huggingface.co/TheBloke/Magicoder-S-DS-6.7B-GPTQ) | | + | | |
-| [mistral/7b/instruct-v0.1](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) | | + | | |
-| [mixtral/8x7b/instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | | + | | |
-| [deepseek-coder/6.7b/instruct](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GPTQ) | | + | | |
-| [deepseek-coder/33b/instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) | | + | | |
-| [stable/3b/code](https://huggingface.co/stabilityai/stable-code-3b) | + | | | |
-| [llama3/8b/instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | | + | | |
-
-## Usage
-
-Refact is free to use for individuals and small teams under BSD-3-Clause license. If you wish to use Refact for Enterprise, please [contact us](https://refact.ai/contact/).
-
-## Custom installation
+| [Refact/1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim) | ✅ | ❌ | ✅ | |
+| [starcoder2/3b/base](https://huggingface.co/bigcode/starcoder2-3b) | ✅ | ❌ | ✅ | |
+| [starcoder2/7b/base](https://huggingface.co/bigcode/starcoder2-7b) | ✅ | ❌ | ✅ | |
+| [starcoder2/15b/base](https://huggingface.co/bigcode/starcoder2-15b) | ✅ | ❌ | ✅ | |
+| [deepseek-coder/1.3b/base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | ✅ | ❌ | ✅ | |
+| [deepseek-coder/5.7b/mqa-base](https://huggingface.co/deepseek-ai/deepseek-coder-5.7bmqa-base) | ✅ | ❌ | ✅ | |
+| [magicoder/6.7b](https://huggingface.co/TheBloke/Magicoder-S-DS-6.7B-GPTQ) | ❌ | ✅ | ❌ | |
+| [mistral/7b/instruct-v0.1](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) | ❌ | ✅ | ❌ | |
+| [mixtral/8x7b/instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | ❌ | ✅ | ❌ | |
+| [deepseek-coder/6.7b/instruct](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GPTQ) | ❌ | ✅ | ❌ | |
+| [deepseek-coder/33b/instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) | ❌ | ✅ | ❌ | |
+| [stable/3b/code](https://huggingface.co/stabilityai/stable-code-3b) | ✅ | ❌ | ❌ | |
+| [llama3/8b/instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | ❌ | ✅ | ❌ | |
-You can also install refact repo without docker:
-```shell
-pip install .
-```
-If you have a GPU with CUDA capability >= 8.0, you can also install it with flash-attention v2 support:
-```shell
-FLASH_ATTENTION_FORCE_BUILD=TRUE MAX_JOBS=4 INSTALL_OPTIONAL=TRUE pip install .
-```
+---
-## FAQ
+# Architecture 🏗️
-Q: Can I run a model on CPU?
+*Details about the architecture will be added here.*
-A: it doesn't run on CPU yet, but it's certainly possible to implement this.
+---
-## Community & Support
+# Contributing 🤝
-- Contributing [CONTRIBUTING.md](CONTRIBUTING.md)
-- [GitHub issues](https://github.com/smallcloudai/refact/issues) for bugs and errors
-- [Community forum](https://github.com/smallcloudai/refact/discussions) for community support and discussions
+If you wish to contribute to this project, feel free to explore our [current issues](https://github.com/smallcloudai/refact/issues) or open new ones for bugs and feature requests, following the guidelines in [CONTRIBUTING.md](CONTRIBUTING.md).
+
+
+---
+
+## Follow Us and FAQ ❓
+
+**Q: Can I run a model on CPU?**
+
+A: Currently, it doesn't run on CPU, but it's certainly possible to implement this.
+
+- [Contributing](CONTRIBUTING.md)
+- [Refact Docs](https://docs.refact.ai/guides/version-specific/self-hosted/)
+- [GitHub Issues](https://github.com/smallcloudai/refact/issues) for bugs and errors
+- [Community Forum](https://github.com/smallcloudai/refact/discussions) for community support and discussions
- [Discord](https://www.smallcloud.ai/discord) for chatting with community members
- [Twitter](https://twitter.com/refact_ai) for product news and updates
+
+---
+
+## License 📜
+
+Refact is free to use for individuals and small teams under the BSD-3-Clause license. If you wish to use Refact for Enterprise, please [contact us](https://refact.ai/contact/).
+
From 283d3d2a3f125fe3026f48125ce8e8d827a73126 Mon Sep 17 00:00:00 2001
From: avie66
Date: Fri, 1 Nov 2024 17:22:35 +0530
Subject: [PATCH 2/4] Update README.md
Updated readme
---
README.md | 13 -------------
1 file changed, 13 deletions(-)
diff --git a/README.md b/README.md
index fbdb0a05..f899dc95 100644
--- a/README.md
+++ b/README.md
@@ -38,9 +38,7 @@ https://github.com/user-attachments/assets/e69ee31d-6308-4050-9ee9-9de1b2af040e
- [Custom Installation](#custom-installation-%EF%B8%8F)
- [Running Refact Self-Hosted in a Docker Container](#running-refact-self-hosted-in-a-docker-container-)
- [Getting Started with Plugins](#getting-started-with-plugins-)
-- [Progress/Future Plans](#progress-and-future-plans-)
- [Supported Models](#supported-models-)
-- [Architecture](#architecture-%EF%B8%8F)
- [Contributing](#contributing-)
- [Follow Us/FAQ](#follow-us-and-faq-)
- [License](#license-)
@@ -137,12 +135,6 @@ Extensions > Refact.ai Assistant > Settings > Infurl
---
-# Progress and Future Plans 🚧
-
-*Details about progress and future plans will be added here.*
-
----
-
## Supported Models 📊
| Model | Completion | Chat | Fine-tuning | [Deprecated](## "Will be removed in next versions") |
@@ -163,11 +155,6 @@ Extensions > Refact.ai Assistant > Settings > Infurl
---
-# Architecture 🏗️
-
-*Details about the architecture will be added here.*
-
----
# Contributing 🤝
From 0e85d404db46ce845de102c2f17f4f58a4418304 Mon Sep 17 00:00:00 2001
From: Awantika
Date: Wed, 11 Dec 2024 19:27:44 +0530
Subject: [PATCH 3/4] Update README.md
---
README.md | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/README.md b/README.md
index f899dc95..b15c8e09 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,7 @@ This repository contains the Refact WebUI, designed for fine-tuning and self-hos
# Demo Video 🎥
-https://github.com/user-attachments/assets/e69ee31d-6308-4050-9ee9-9de1b2af040e
+A demo video will be added soon.
---
@@ -142,16 +142,16 @@ Extensions > Refact.ai Assistant > Settings > Infurl
| [Refact/1.6B](https://huggingface.co/smallcloudai/Refact-1_6B-fim) | ✅ | ❌ | ✅ | |
| [starcoder2/3b/base](https://huggingface.co/bigcode/starcoder2-3b) | ✅ | ❌ | ✅ | |
| [starcoder2/7b/base](https://huggingface.co/bigcode/starcoder2-7b) | ✅ | ❌ | ✅ | |
-| [starcoder2/15b/base](https://huggingface.co/bigcode/starcoder2-15b) | ✅ | ❌ | ✅ | |
-| [deepseek-coder/1.3b/base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | ✅ | ❌ | ✅ | |
-| [deepseek-coder/5.7b/mqa-base](https://huggingface.co/deepseek-ai/deepseek-coder-5.7bmqa-base) | ✅ | ❌ | ✅ | |
-| [magicoder/6.7b](https://huggingface.co/TheBloke/Magicoder-S-DS-6.7B-GPTQ) | ❌ | ✅ | ❌ | |
-| [mistral/7b/instruct-v0.1](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) | ❌ | ✅ | ❌ | |
-| [mixtral/8x7b/instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | ❌ | ✅ | ❌ | |
-| [deepseek-coder/6.7b/instruct](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GPTQ) | ❌ | ✅ | ❌ | |
-| [deepseek-coder/33b/instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) | ❌ | ✅ | ❌ | |
-| [stable/3b/code](https://huggingface.co/stabilityai/stable-code-3b) | ✅ | ❌ | ❌ | |
-| [llama3/8b/instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | ❌ | ✅ | ❌ | |
+| [starcoder2/15b/base](https://huggingface.co/bigcode/starcoder2-15b) | ✅ | ❌ | ✅ | ✅ |
+| [deepseek-coder/1.3b/base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | ✅ | ❌ | ✅ | ✅ |
+| [deepseek-coder/5.7b/mqa-base](https://huggingface.co/deepseek-ai/deepseek-coder-5.7bmqa-base) | ✅ | ❌ | ✅ | ✅ |
+| [magicoder/6.7b](https://huggingface.co/TheBloke/Magicoder-S-DS-6.7B-GPTQ) | ❌ | ✅ | ❌ | ✅ |
+| [mistral/7b/instruct-v0.1](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) | ❌ | ✅ | ❌ | ✅ |
+| [mixtral/8x7b/instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | ❌ | ✅ | ❌ | |
+| [deepseek-coder/6.7b/instruct](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GPTQ) | ❌ | ✅ | ❌ | |
+| [deepseek-coder/33b/instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) | ❌ | ✅ | ❌ | |
+| [stable/3b/code](https://huggingface.co/stabilityai/stable-code-3b) | ✅ | ❌ | ❌ | |
+| [llama3/8b/instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | ❌ | ✅ | ❌ | |
---
From 61923cbafc14a47f97aae70d8d9d18280a47eafe Mon Sep 17 00:00:00 2001
From: Awantika
Date: Wed, 11 Dec 2024 19:29:16 +0530
Subject: [PATCH 4/4] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index b15c8e09..74da4196 100644
--- a/README.md
+++ b/README.md
@@ -130,7 +130,7 @@ Go to plugin settings and set up a custom inference URL `http://127.0.0.1:8008`
Settings > Tools > Refact.ai > Advanced > Inference URL
VSCode
-Extensions > Refact.ai Assistant > Settings > Infurl
+Extensions > Refact.ai Assistant > Settings > Address URL
---