
Running LLM As Chatbot in your cloud


This repository is a fork of deep-diver/LLM-As-Chatbot. The only difference from the original repo is the .dstack.yml file, which allows you to run LLM-As-Chatbot in your cloud with a single dstack run command, automatically provisioning cloud resources for you.
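For context, a dstack configuration of this kind is a small YAML file that declares how to start the app and which port to expose. The sketch below is illustrative only, not the actual file from the repository; the port and commands are assumptions:

type: task
ports:
  - 7860  # Gradio's default port (assumption; the repo's config may use another)
commands:
  - pip install -r requirements.txt
  - python app.py  # hypothetical entry point; see the actual .dstack.yml in the repo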

1. Clone the repository

git clone https://github.com/dstackai/LLM-As-Chatbot.git
cd LLM-As-Chatbot

2. Install and set up dstack

pip install "dstack[aws,gcp,azure,lambda]" -U
dstack start

Once the server is up, log in to the dstack UI and create a project configured with your cloud credentials (AWS, GCP, or Azure). Then copy the project's dstack config command and run it locally so the CLI uses this project.
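The copied command will look roughly like the one below; the server URL, project name, and token are placeholders here, and the exact command (including its flags) is the one shown on your project's settings page:

dstack config --url http://127.0.0.1:3000 --project gcp --token <your-token>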

3. Create a profile

Create a .dstack/profiles.yml file that points to the project you created and describes the required resources.

Example:

profiles:
  - name: gcp
    project: gcp      # the name of the project created in step 2
    resources:
      memory: 48GB    # minimum RAM for the instance
      gpu:
        memory: 24GB  # minimum GPU memory
    default: true     # picked up by dstack run automatically

4. Initialize the repository

Run the dstack init command:

dstack init

5. Run the app in your cloud

Use the dstack run command:

dstack run .

This command will build the environment and run LLM-As-Chatbot in your cloud.

dstack will automatically forward the port to your local machine, providing secure and convenient access.
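Once the run starts, the CLI prints the forwarded URL to open in your browser. Assuming the app listens on Gradio's default port, it would look like this (the actual port depends on the repo's configuration):

http://127.0.0.1:7860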

[Screenshot: the LLM-As-Chatbot Gradio app running via dstack]

More information

For more details on how dstack works, check out its documentation at https://dstack.ai/docs.