
Merge pull request #10 from 3Alan/docker
Dockerize
3Alan authored May 1, 2023
2 parents c077ec4 + 8fcc18b commit 8aae06b
Showing 58 changed files with 2,153 additions and 1,936 deletions.
5 changes: 3 additions & 2 deletions .env.example
Original file line number Diff line number Diff line change
@@ -1,6 +1,7 @@
# optional for backend code
# OPENAI_API_KEY=""
# will replace https://api.openai.com/v1
# OPENAI_PROXY=""

# required backend url for production frontend code
VITE_SERVICES_URL=""
# optional backend url for production frontend code
# VITE_SERVICES_URL="http://127.0.0.1:8080"
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
@@ -10,7 +10,7 @@ assignees: ''
A clear and concise description of what the bug is.

**Provide logs**
Please provide the log located at `api-src/app.log` and the client error (browser console) so that I can better understand the issue.
Please provide the log located at `logs/app.log` and the client error (browser console) so that I can better understand the issue.

**To Reproduce**
Steps to reproduce the behavior.
2 changes: 2 additions & 0 deletions .gitignore
@@ -36,3 +36,5 @@ instance/
build
*.spec
temp

data
58 changes: 49 additions & 9 deletions README.md
@@ -2,7 +2,7 @@

Chat-Markdown is an open-source project that allows you to chat with your markdown files.

![Stack](https://skillicons.dev/icons?i=vite,react,ts,tailwind,py)
![Stack](https://skillicons.dev/icons?i=vite,react,ts,tailwind,flask)

## Demo

@@ -22,6 +22,7 @@ Deploy on Vercel and Railway
- 📤 Upload files
- 💾 Data saved locally
- 💰 Token usage tracker
- 🐳 Dockerize

## Future Development

@@ -30,23 +31,56 @@ I plan to add the following features in the future:
- [ ] Support for more file formats: pdf, txt
- [ ] Download doc from the internet
- [ ] Markdown-formatted message
- [ ] Dockerize
- [ ] i18n
- [ ] Desktop application

If you find this project helpful, please consider giving it a star 🌟

## How to run locally?
## Environment Variables

| Name | Description | Optional |
| -------------------- | -------------------------------------- | -------- |
| OPENAI_API_KEY | sk-xxx ||
| OPENAI_PROXY | will replace https://api.openai.com/v1 ||
| VITE_SERVICES_URL | backend url for frontend code ||
| VITE_DISABLED_UPLOAD | DISABLED_UPLOAD ||
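
As a rough illustration of how the backend side of this table is typically consumed (a hedged sketch mirroring the pattern in `server/app.py`; only the variable names come from `.env.example`):

```python
import os

# Sketch: read the server-side variables from the environment.
# OPENAI_PROXY, when set, replaces the default OpenAI API base URL.
openai_api_key = os.environ.get("OPENAI_API_KEY", "")
openai_proxy = os.environ.get("OPENAI_PROXY", "")
api_base = openai_proxy or "https://api.openai.com/v1"
```

The `VITE_`-prefixed variables are consumed by the frontend build instead, via `import.meta.env`.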

## Q&A

### How to run locally?

> **Warning**
>
> Please check whether you can access OpenAI from your region; see the [issue](https://github.com/3Alan/chat-markdown/issues/3#issuecomment-1511470063) for more information.
### Create .env
1. Create .env (Optional)

Create a `.env` file and copy the contents of `.env.example` to modify it.

2. Run App

Create `.env` file and fill in environment variables, see `.env.example` for reference
```bash
docker-compose up -d
```

Please add `--build` to rebuild the image after each code update.

```bash
docker-compose up -d --build
```

### Frontend
Now you can access the app at `http://localhost:8081`.

### Local Development

<details>
<summary>Details</summary>

#### Create .env (Optional)

Create a `.env` file and copy the contents of `.env.example` to modify it.

#### Run Frontend

1. Install dependencies

@@ -60,14 +94,14 @@ yarn
yarn dev
```

### Backend
#### Run Backend

You need a Python environment.

1. Create virtual environment

```
cd api-src
cd server
python -m venv .venv
```

@@ -94,5 +128,11 @@ pip install -r requirements.txt
4. Run Services

```
flask run --reload
flask run --reload --port=8080
```

</details>

## Buy me a coffee

<img height="300" src="https://raw.githubusercontent.com/3Alan/images/master/img/%E5%BE%AE%E4%BF%A1%E6%94%AF%E4%BB%98%E5%AE%9D%E4%BA%8C%E5%90%88%E4%B8%80%E6%94%B6%E6%AC%BE%E7%A0%81.jpg" />
1,843 changes: 0 additions & 1,843 deletions api-src/userData/html/clean-code-javascript.html

This file was deleted.

1 change: 0 additions & 1 deletion api-src/userData/index/TypeScript入门学习总结.json

This file was deleted.

1 change: 0 additions & 1 deletion api-src/userData/index/clean-code-javascript.json

This file was deleted.

1 change: 0 additions & 1 deletion api-src/userData/index/openai-chatgpt-prompts.json

This file was deleted.

1 change: 0 additions & 1 deletion api-src/userData/index/test.json

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes
File renamed without changes.
@@ -3,7 +3,7 @@ import classNames from 'classnames';
import { FC, PropsWithChildren, ReactNode, useEffect, useState } from 'react';
import { MessageItem } from './constants';
import Loading from './Loading';
import { isString } from 'lodash';
import { isEmpty, isString } from 'lodash';

interface MessageProps extends PropsWithChildren {
isQuestion?: boolean;
@@ -55,7 +55,7 @@ const Message: FC<MessageProps> = ({

{(item?.sources || item?.cost) && (
<div className="flex px-3 pt-2 pb-2 border-t-gray-200 border-t items-center justify-between">
{item?.sources && (
{!isEmpty(item?.sources) && (
<HighlightOutlined className=" text-gray-400" onClick={() => onReplyClick?.(item)} />
)}

File renamed without changes.
@@ -207,6 +207,7 @@ const ChatWindow: FC<ChatWindowProps> = ({
<div className="p-4 pb-0 border-t border-t-gray-200 border-solid border-x-0 border-b-0">
<div className="relative">
<Input.TextArea
disabled={loading}
size="large"
placeholder="Input your question"
value={query}
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
1 change: 1 addition & 0 deletions src/styles/globals.css → client/src/styles/globals.css
@@ -32,6 +32,7 @@ body {

.markdown-body .hl-source {
background-color: #eff6ff;
border-radius: 6px;
}

.markdown-body {
File renamed without changes.
File renamed without changes.
File renamed without changes.
3 changes: 1 addition & 2 deletions src/utils/request.ts → client/src/utils/request.ts
@@ -1,8 +1,7 @@
import { message } from 'antd';
import axios from 'axios';
import { isDev } from './isDev';

export const baseURL = isDev ? 'http://127.0.0.1:5000' : import.meta.env.VITE_SERVICES_URL;
export const baseURL = import.meta.env.VITE_SERVICES_URL || 'http://127.0.0.1:8080';

const request = axios.create({
baseURL
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
18 changes: 18 additions & 0 deletions docker-compose.yml
@@ -0,0 +1,18 @@
version: '3'
services:
client:
build:
context: .
dockerfile: docker/Dockerfile.client
ports:
- "8081:80"
server:
build:
context: .
dockerfile: docker/Dockerfile.server
ports:
- "8080:8080"
volumes:
- ./data:/server/static
- ./logs:/server/logs

21 changes: 21 additions & 0 deletions docker/Dockerfile.client
@@ -0,0 +1,21 @@
FROM node:16-alpine as client

WORKDIR /client

COPY ./client .

RUN yarn install

RUN yarn build

RUN rm -rf node_modules
RUN rm -rf src

FROM nginx:1.23.4-alpine

COPY --from=client /client/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
14 changes: 14 additions & 0 deletions docker/Dockerfile.server
@@ -0,0 +1,14 @@
FROM python:3.9-slim-buster

WORKDIR /server

COPY ./server/requirements.txt requirements.txt

RUN pip3 install -r requirements.txt

COPY ./server .
COPY .env .

EXPOSE 8080

CMD ["flask", "run", "--host", "0.0.0.0", "--port", "8080"]
19 changes: 19 additions & 0 deletions nginx.conf
@@ -0,0 +1,19 @@
server {
listen 80;

location ~ /(api|static) {
proxy_pass http://server:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}

location / {
alias /usr/share/nginx/html/;
try_files $uri /index.html;
if ($request_filename ~ .*\.(htm|html)$) {
add_header Cache-Control 'no-store, no-cache, must-revalidate';
}
}
}
File renamed without changes.
66 changes: 32 additions & 34 deletions api-src/app.py → server/app.py
@@ -22,23 +22,25 @@
openai.api_base = openai_proxy


user_data_dir = "userData"
staticPath = "static"

if not os.path.exists(f"{user_data_dir}/html"):
os.makedirs(f"{user_data_dir}/html")
if not os.path.exists(f"{user_data_dir}/index"):
os.makedirs(f"{user_data_dir}/index")
if not os.path.exists(f"{user_data_dir}/temp"):
os.makedirs(f"{user_data_dir}/temp")
if not os.path.exists(f"{staticPath}/html"):
os.makedirs(f"{staticPath}/html")
if not os.path.exists(f"{staticPath}/index"):
os.makedirs(f"{staticPath}/index")
if not os.path.exists(f"{staticPath}/temp"):
os.makedirs(f"{staticPath}/temp")
if not os.path.exists(f"logs"):
os.makedirs(f"logs")


app = Flask(__name__, static_folder=f"{user_data_dir}")
app = Flask(__name__, static_folder=f"{staticPath}")


CORS(app)

logger = logging.getLogger(__name__)
file_handler = logging.FileHandler("app.log", encoding="utf-8")
file_handler = logging.FileHandler("logs/app.log", encoding="utf-8")
formatter = logging.Formatter(
"%(asctime)s %(levelname)s: %(message)s", datefmt="%Y-%m-%d %H:%M:%S"
)
@@ -72,7 +74,7 @@ def summarize_index():

UnstructuredReader = download_loader("UnstructuredReader")
loader = UnstructuredReader()
documents = loader.load_data(file=Path(f"./{user_data_dir}/html/{index_name}.html"))
documents = loader.load_data(file=Path(f"./{staticPath}/html/{index_name}.html"))
index = GPTListIndex.from_documents(documents)

# predictor cost
@@ -82,31 +84,29 @@
llm_predictor=llm_predictor, embed_model=embed_model
)

# TODO: Format everything as markdown
prompt = f"""
Summarize this document and provide three questions related to the summary. Try to use your own words when possible. Keep your answer under 5 sentences.
Use the following format:
<summary text>
Questions you may want to ask 🤔
1. <question text>
2. <question text>
3. <question text>
"""

index.query(
(
"Summarize this document and provide three questions related to the summary. Try to use your own words when possible. Keep your answer under 5 sentences. \n"
"The three questions use the following format(add two line breaks at the beginning of the template):"
"Template:"
"Questions you may want to ask 🤔 \n"
"1. <question_1> \n"
"2. <question_2> \n"
"3. <question_3> \n"
),
prompt,
response_mode="tree_summarize",
service_context=service_context,
optimizer=SentenceEmbeddingOptimizer(percentile_cutoff=0.8),
)

res = index.query(
(
"Summarize this document and provide three questions related to the summary. Try to use your own words when possible. Keep your answer under 5 sentences. \n"
"The three questions use the following format(add two line breaks at the beginning of the template):"
"Template:"
"Questions you may want to ask 🤔 \n"
"1. <question_1> \n"
"2. <question_2> \n"
"3. <question_3> \n"
),
prompt,
streaming=True,
response_mode="tree_summarize",
optimizer=SentenceEmbeddingOptimizer(percentile_cutoff=0.8),
@@ -134,9 +134,7 @@ def query_index():
if open_ai_key:
os.environ["OPENAI_API_KEY"] = open_ai_key

index = GPTSimpleVectorIndex.load_from_disk(
f"{user_data_dir}/index/{index_name}.json"
)
index = GPTSimpleVectorIndex.load_from_disk(f"{staticPath}/index/{index_name}.json")

# predictor cost
llm_predictor = MockLLMPredictor(max_tokens=256)
@@ -174,7 +172,7 @@ def upload_file():
uploaded_file = request.files["file"]
filename = uploaded_file.filename
print(os.getcwd(), os.path.abspath(__file__))
filepath = os.path.join(f"{user_data_dir}/temp", os.path.basename(filename))
filepath = os.path.join(f"{staticPath}/temp", os.path.basename(filename))
uploaded_file.save(filepath)

token_usage = create_index(filepath, os.path.splitext(filename)[0])
@@ -203,14 +201,14 @@ def upload_file():

@app.route("/api/index-list", methods=["GET"])
def get_index_files():
dir = f"{user_data_dir}/index"
dir = f"{staticPath}/index"
files = os.listdir(dir)
return files


@app.route("/api/html-list", methods=["GET"])
def get_html_files():
dir = f"{user_data_dir}/html"
dir = f"{staticPath}/html"
files = os.listdir(dir)
return [
{"path": f"/{dir}/{file}", "name": os.path.splitext(file)[0]} for file in files
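]

The mapping performed by `get_html_files` above can be sketched in isolation (the helper name and sample filename are illustrative, not part of the project):

```python
import os

def html_file_entries(dir_path, files):
    # Each HTML file becomes a dict with its URL path and its
    # extension-less display name, as in server/app.py.
    return [
        {"path": f"/{dir_path}/{name}", "name": os.path.splitext(name)[0]}
        for name in files
    ]

entries = html_file_entries("static/html", ["clean-code-javascript.html"])
```

Here `entries[0]["name"]` would be `clean-code-javascript`, the title shown in the client's file list.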