Dockerize and Add Release-Push Workflow (#688)

---------

Co-authored-by: Andrew Lapp <[email protected]>
lapp0 and Andrew Lapp authored Feb 21, 2024
1 parent 2dbc279 commit 26d9f0c
Showing 7 changed files with 81 additions and 11 deletions.
37 changes: 37 additions & 0 deletions .github/workflows/release_docker.yml
@@ -0,0 +1,37 @@
name: Release Docker

on:
  release:
    types:
      - created
  workflow_dispatch:
    inputs:
      release_tag:
        description: 'Release Tag (for manual dispatch)'
        required: false
        default: 'latest'

jobs:
  release-job:
    name: Build and publish on Docker Hub
    runs-on: ubuntu-latest
    environment: release
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: |
            outlinesdev/outlines:latest
            outlinesdev/outlines:${{ github.event.release.tag_name }}
          build-args: |
            BUILDKIT_CONTEXT_KEEP_GIT_DIR=true
      - name: Clean docker cache
        run: docker system prune --all --force
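
In addition to firing when a release is created, the `workflow_dispatch` trigger above lets the workflow be started by hand. A minimal sketch using the GitHub CLI, with a hypothetical value for the `release_tag` input (which defaults to `latest`):

```bash
# Manually dispatch the Docker release workflow from a checkout of the repository;
# the tag value below is hypothetical.
gh workflow run release_docker.yml -f release_tag=v0.1.0
```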
@@ -1,16 +1,14 @@
-name: Release
+name: Release PyPi

on:
  release:
    types:
      - created

jobs:
  release-job:
    name: Build and publish on PyPi
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
17 changes: 17 additions & 0 deletions Dockerfile
@@ -0,0 +1,17 @@
FROM python:3.10

WORKDIR /outlines

RUN pip install --upgrade pip

# Copy necessary build components
COPY pyproject.toml .
COPY outlines ./outlines

# Install outlines and outlines[serve]
# .git required by setuptools-scm
RUN --mount=source=.git,target=.git,type=bind \
    pip install --no-cache-dir .[serve]

# https://outlines-dev.github.io/outlines/reference/vllm/
# Exec form so that arguments passed to `docker run` (e.g. --model=...) reach the server
ENTRYPOINT ["python3", "-m", "outlines.serve.serve"]
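
The `RUN --mount=...,type=bind` instruction is a BuildKit feature, so building this image locally requires BuildKit (the default builder in recent Docker releases). A sketch of a local build from a clone of the repository:

```bash
# Force BuildKit on older Docker versions; newer versions use it by default.
DOCKER_BUILDKIT=1 docker build -t outlines-serve .
```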
2 changes: 1 addition & 1 deletion README.md
@@ -35,7 +35,7 @@ First time here? Go to our [setup guide](https://outlines-dev.github.io/outlines
- [x] 💾 Caching of generations
- [x] 🗂️ Batch inference
- [x] 🎲 Sample with the greedy, multinomial and beam search algorithms (and more to come!)
-- [x] 🚀 [Serve with vLLM](https://outlines-dev.github.io/outlines/reference/vllm)
+- [x] 🚀 [Serve with vLLM](https://outlines-dev.github.io/outlines/reference/vllm), with an official Docker image, [`outlinesdev/outlines`](https://hub.docker.com/r/outlinesdev/outlines)!


Outlines 〰 has new releases and features coming every week. Make sure to ⭐ star and 👀 watch this repository, follow [@dottxtai][twitter] to stay up to date!
9 changes: 9 additions & 0 deletions docs/community/contribute.md
@@ -42,6 +42,15 @@ pip install -e .[test]
pre-commit install
```

#### Developing Serve Endpoint Via Docker

```bash
docker build -t outlines-serve .
docker run -p 8000:8000 outlines-serve --model="mistralai/Mistral-7B-Instruct-v0.2"
```

This builds the `outlines-serve` image and runs it on `localhost:8000` with the model `mistralai/Mistral-7B-Instruct-v0.2`.
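
vLLM runs inference on GPU, so when developing against real models you will likely want to expose the host's GPUs to the container. A sketch assuming the host has NVIDIA GPUs and the NVIDIA Container Toolkit installed (an assumption about the development machine, not part of this patch):

```bash
# --gpus all passes all host GPUs through to the container
docker run --gpus all -p 8000:8000 outlines-serve --model="mistralai/Mistral-7B-Instruct-v0.2"
```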

### Before pushing your code

Run the tests:
11 changes: 7 additions & 4 deletions docs/quickstart.md
@@ -239,15 +239,18 @@ Outlines can be deployed as a LLM service using [vLLM][vllm]{:target="_blank"} a
First start the server:

```python
-python -m outlines.serve.serve
+python -m outlines.serve.serve --model="mistralai/Mistral-7B-Instruct-v0.2"
```

-This will by default start a server at `http://127.0.0.1:8000` (check what the console says, though) with the OPT-125M model. If you want to specify another model:
+Or you can start the server with Outlines' official Docker image:

-```python
-python -m outlines.serve.serve --model="mistralai/Mistral-7B-Instruct-v0.2"
+```bash
+docker run -p 8000:8000 outlinesdev/outlines --model="mistralai/Mistral-7B-Instruct-v0.2"
```

+This will by default start a server at `http://127.0.0.1:8000` (check what the console says, though). Without the `--model` argument set, the OPT-125M model is used.


You can then query the model in shell by passing a prompt and a [JSON Schema][jsonschema]{:target="_blank"} specification for the structure of the output:

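For instance, a minimal sketch of such a query, assuming the server exposes a vLLM-style `/generate` endpoint that accepts `prompt` and `schema` fields in its JSON body:

```bash
# Ask for a reply constrained to a short string
curl http://127.0.0.1:8000/generate \
    -d '{
        "prompt": "What is the capital of France?",
        "schema": {"type": "string", "maxLength": 50}
        }'
```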
12 changes: 9 additions & 3 deletions docs/reference/vllm.md
@@ -13,15 +13,21 @@ pip install outlines[serve]
You can then start the server with:

```bash
-python -m outlines.serve.serve
+python -m outlines.serve.serve --model="mistralai/Mistral-7B-Instruct-v0.2"
```

-This will by default start a server at `http://127.0.0.1:8000` (check what the console says, though) with the OPT-125M model. If you want to specify another model (e.g. Mistral-7B-Instruct-v0.2), you can do so with the `--model` parameter:
+This will by default start a server at `http://127.0.0.1:8000` (check what the console says, though). Without the `--model` argument set, the OPT-125M model is used. The `--model` argument allows you to specify any model of your choosing.

+### Alternative Method: Via Docker
+
+You can install and run the server with Outlines' official Docker image using the command:
+
```bash
-python -m outlines.serve.serve --model="mistralai/Mistral-7B-Instruct-v0.2"
+docker run -p 8000:8000 outlinesdev/outlines --model="mistralai/Mistral-7B-Instruct-v0.2"
```
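
Because the release workflow above pushes both a `latest` tag and a per-release tag, a specific version of the image can be pinned. A sketch with a hypothetical version tag:

```bash
# The version tag below is hypothetical; see the repository's releases for real tags.
docker run -p 8000:8000 outlinesdev/outlines:v0.1.0 --model="mistralai/Mistral-7B-Instruct-v0.2"
```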

+## Querying Endpoint

You can then query the model in shell by passing a prompt and either

1. a [JSON Schema][jsonschema]{:target="_blank"} specification or