Renamed lorax-inference to lorax
tgaddair committed Nov 16, 2023
1 parent a9426bb commit 56bb243
Showing 13 changed files with 89 additions and 258 deletions.
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/bug-report.yml
@@ -1,5 +1,5 @@
name: "\U0001F41B Bug Report"
-description: Submit a bug report to help us improve lorax-inference
+description: Submit a bug report to help us improve LoRAX
body:
- type: textarea
id: system-info
@@ -16,7 +16,7 @@ body:
Deployment specificities (Kubernetes, EKS, AKS, any particular deployments):
The current version being used:
-placeholder: lorax-inference version, platform, python version, ...
+placeholder: lorax version, platform, python version, ...
validations:
required: true

4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/feature-request.yml
@@ -1,5 +1,5 @@
name: "\U0001F680 Feature request"
-description: Submit a proposal/request for a new lorax-inference feature
+description: Submit a proposal/request for a new LoRAX feature
labels: [ "feature" ]
body:
- type: textarea
@@ -28,4 +28,4 @@ body:
attributes:
label: Your contribution
description: |
-Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD [readme](https://github.com/huggingface/lorax-inference/blob/main/CONTRIBUTING.md)
+Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD [readme](https://github.com/predibase/lorax/blob/main/CONTRIBUTING.md)
2 changes: 1 addition & 1 deletion .github/workflows/load_test.yaml
@@ -70,7 +70,7 @@ jobs:
- name: Start starcoder
run: |
-docker run --name tgi-starcoder --rm --gpus all -p 3000:80 -v ${{ env.DOCKER_VOLUME }}:/data -e HUGGING_FACE_HUB_TOKEN=${{ secrets.HUGGING_FACE_HUB_TOKEN }} --pull always -d ghcr.io/huggingface/lorax-inference:latest --model-id bigcode/starcoder --num-shard 2 --max-batch-total-tokens 32768
+docker run --name tgi-starcoder --rm --gpus all -p 3000:80 -v ${{ env.DOCKER_VOLUME }}:/data -e HUGGING_FACE_HUB_TOKEN=${{ secrets.HUGGING_FACE_HUB_TOKEN }} --pull always -d ghcr.io/predibase/lorax:latest --model-id bigcode/starcoder --num-shard 2 --max-batch-total-tokens 32768
sleep 10
wget --timeout 10 --retry-on-http-error --waitretry=1 --tries=240 http://localhost:3000/health
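
For reference, the renamed image can be exercised outside CI with essentially the same command the workflow runs. This is a minimal local sketch; the container name, host volume path, and token environment variable below are illustrative placeholders rather than part of this commit:

# Minimal local smoke test of the renamed image; only the image name, model flags,
# and health endpoint come from the diff above, the rest are placeholders.
docker run --name lorax-starcoder --rm --gpus all -p 3000:80 \
  -v $PWD/data:/data \
  -e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
  --pull always -d ghcr.io/predibase/lorax:latest \
  --model-id bigcode/starcoder --num-shard 2 --max-batch-total-tokens 32768

# Poll the same health endpoint the workflow waits on.
wget --timeout 10 --retry-on-http-error --waitretry=1 --tries=240 http://localhost:3000/health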