Harbor User Guide

First of all, welcome to Harbor! This guide will help you get started with the Harbor CLI and manage the main services. It assumes that you already have the Harbor CLI installed and working. See the Harbor CLI Installation guide for more information.

Getting Started

Start Harbor with default services:

harbor up

This will start Ollama and Open WebUI by default. These two are pre-configured as the default Backend and Frontend, respectively. Being a "default" service means that they will be started automatically when you run harbor up (and many other commands) without any additional arguments.

Additionally, these two services are the most thoroughly tested and are integrated with the rest of Harbor in the most seamless way.

Tip

You can configure default services using harbor defaults

Here are some sample logs from a successful start:

user@os:~$ ▼ h u
[+] Running 3/3
 ✔ Network harbor_harbor-network  Created                                       0.1s
 ✔ Container harbor.ollama        Healthy                                       0.7s
 ✔ Container harbor.webui         Healthy                                       5.7s

h and u are both aliases: the first is for the CLI itself (see harbor link --short), the second is for the up command. You can typically see the available aliases in the CLI help output as well as on the pages of the CLI Reference.

When Harbor is running, you can access its default frontend service in the web browser:

harbor open

harbor open uses harbor url under the hood. You can also specify a service name to open a specific service:

# See the URL
harbor url <service handle>
harbor open <service handle>

When you run the harbor up / harbor open / harbor url commands without specifying a service handle, they default to the service configured as the default Frontend. You can adjust that using harbor config if needed.
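If you are not sure which configuration key controls this, you can search the config listing for it. A minimal sketch, assuming standard grep; the exact key names may differ between Harbor versions:

# Look for frontend-related keys
harbor config ls | grep -i frontend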

Now, let's add some more services to your Harbor setup. To start a service that is not a default one, use the same harbor up command, but add the service handle as an argument:

harbor up searxng

Tip

Every service in Harbor will have its own dedicated "handle" that can be used with most compatible commands. You'll find all the handles in the Services section. For example, Open WebUI has the handle webui. You'll see the same handles used for configuration, logs, files and other service-specific operations.
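For example, the webui handle works the same way across the commands covered in this guide:

# All of these target Open WebUI via its handle
harbor up webui
harbor logs webui
harbor url webui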

Default Services

Harbor allows you to specify the set of services that will be started by default. You can manage them using the harbor defaults command. This covers the frontends, backends, and satellite services that may be part of your Harbor setup.

# Show the list of default services
harbor defaults

# Add a new default service
harbor defaults add searxng

# Swap a default backend
harbor defaults rm ollama
harbor defaults add vllm
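Since defaults define what harbor up starts without arguments, a typical follow-up after changing them is a restart with the new set:

# Restart with the new defaults
harbor down
harbor up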

Running LLMs

Most of Harbor's value comes from running everything at once with a few simple commands. This includes running LLMs, which are the main focus of Harbor. In order to run them, you'll want to start one of the supported backend services. You will find detailed guides on the configuration and usage of each service on its respective documentation page, but here are some quick examples for the most common backends:

Ollama

ollama is the default LLM backend due to its convenience for the end user; its main benefit is that it automatically detects system resources and features very well.

Harbor not only runs ollama as a service, but also gives you full access to its CLI via the dockerized harbor ollama command.

List available models:

harbor ollama list

Pull a new model:

harbor ollama pull <model_name>

Tip

You can run any ollama commands via harbor ollama ...
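For instance, a quick pull-and-check session (the model tag is only an example):

# Pull a model and confirm it appears in the local list
harbor ollama pull llama3.1:8b
harbor ollama list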

llama.cpp

Use llama.cpp's own cache:

harbor llamacpp model <full hugging face gguf URL>

# Example
harbor llamacpp model https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf/blob/main/dolphin-2.9.4-llama3.1-8b-Q4_K_S.gguf

The model will be downloaded and cached on the next harbor up with the llamacpp service.
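Putting it together with the example model from above:

# Point llama.cpp at the GGUF, then start the service to trigger the download
harbor llamacpp model https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf/blob/main/dolphin-2.9.4-llama3.1-8b-Q4_K_S.gguf
harbor up llamacpp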

Tip

You can quickly jump to HuggingFace model lookup with the harbor hf find command:

harbor hf find dolphin gguf

Use shared HuggingFace hub cache:

# Download
harbor hf download <user/repo> <file.gguf>

# Locate the file
harbor find file.gguf

# Set the path to the model
harbor config set llamacpp.model.specifier -m /app/models/<path to file.gguf>
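For example, with the GGUF repository from the section above (keep the placeholder path until you've located the actual file with harbor find):

harbor hf download cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf dolphin-2.9.4-llama3.1-8b-Q4_K_S.gguf
harbor find dolphin-2.9.4-llama3.1-8b-Q4_K_S.gguf
harbor config set llamacpp.model.specifier -m /app/models/<path to file.gguf>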

vLLM

Set model:

harbor vllm model <user/repo>

# Example
harbor vllm model google/gemma-2-2b-it

Tip

You can set HuggingFace Hub token to access gated/private models:

harbor hf token <your HF token>

This token will be pre-configured for services that might need it for that purpose.
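For example, Gemma weights are gated on HuggingFace, so the token needs to be in place before vLLM can download them (the token value below is a placeholder):

# Configure the token once, then point vLLM at the gated model
harbor hf token hf_xxxxxxxxxxxx
harbor vllm model google/gemma-2-2b-it
harbor up vllm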

Other backends

Please refer to the Backends section in the Services documentation for even more backends and their configuration.

Configuring Services

There are three layers of configuration in Harbor:

Service CLIs

For the most common/frequent configs, Harbor provides CLI aliases; one such example is the configuration of service models:

# Setting service model via CLI alias
harbor llamacpp model https://huggingface.co/lm-kit/gemma-2-2b-gguf/blob/main/gemma-2-2B-Q8_0.gguf
harbor tgi model google/gemma-2-2b-it
harbor vllm model google/gemma-2-2b-it
harbor aphrodite model google/gemma-2-2b-it
harbor tabbyapi model google/gemma-2-2b-it-exl2
harbor mistralrs model google/gemma-2-2b-it
harbor opint model google/gemma-2-2b-it
harbor cmdh model google/gemma-2-2b-it
harbor fabric model google/gemma-2-2b-it
harbor parler model parler-tts/parler-tts-large-v1
harbor airllm model meta-llama/Meta-Llama-3.1-8B-Instruct
harbor txtai rag model llama3.1:8b-instruct-q6_K
harbor aider model llama3.1:8b-instruct-q6_K
harbor chatui model llama3.1:8b-instruct-q6_K
harbor aichat model llama3.1:8b-instruct-q6_K

These aliases are all linked to the harbor config under the hood, which means that you can use either the alias or the config option directly - it'll work the same way.

Aliases are typically set up for the "hot" configuration options, like setting the model, the version, or other frequently used values.
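To illustrate the equivalence (assuming vllm.model maps to VLLM_MODEL, consistent with the vllm.version example later in this guide):

# These two commands have the same effect
harbor vllm model google/gemma-2-2b-it
harbor config set vllm.model google/gemma-2-2b-it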

Note

As you can see above, services accept different values for the model specifier, depending on their internal configuration. Refer to Services docs for detailed instructions on how to configure each service.

Versions:

# Setting service version via CLI alias
harbor vllm version 0.5.3
harbor webui version main
harbor mistralrs version 0.3
harbor chatui version latest
harbor comfyui version latest-cuda

Tip

Most Service CLIs will also have dedicated help entries showing all available options:

harbor <service> --help

One example of an option that typically doesn't have an alias is the service's port on the host. You can still adjust it, though - see the section below!

Harbor Config

Service CLIs are aliases for the underlying Harbor Config, which can be accessed directly via the harbor config command:

# Usage help
harbor config --help

harbor config works with the .env file as a key/value store, allowing you to set and get configuration values for services.

# See all configuration options
harbor config ls
# WEBUI_NAME    Harbor
# VLLM_VERSION  v0.5.3
# ... Many more options

# Get a specific value, accepts various aliases
harbor config get VLLM_VERSION
harbor config get vllm_version
harbor config get vllm.version

# Set a value, accepts same aliases as above
harbor config set webui.name Harbor

See a more detailed overview in the harbor config CLI Reference.

Using configuration files

Most services can be configured via either .env or a service-specific YAML, JSON, or TOML configuration file (refer to the service docs for specifics). These files are all stored in the Harbor workspace on your machine and can be edited directly.

# Show the path to the Harbor workspace
harbor home

# Open workspace in the file manager
open $(harbor home)

# Shortcut for VS Code users
harbor vscode

Environment Variables

Tip

You can add arbitrary environment variables to the .env file in the workspace and they will be available to all services. Alternatively, you can use harbor env for variables that will only be visible to a specific service.

# Set a service-specific environment variable
harbor env ollama OLLAMA_DEBUG 1

# Show all environment variables for a service
harbor env ollama

# Get env var value
harbor env ollama OLLAMA_DEBUG

# Open the global .env file with default editor
open $(harbor home)/.env

# Open service-specific env overrides
open $(harbor home)/ollama/override.env

You can find all the externally tracked configuration folders by listing the related configurations:

harbor config ls | grep CONFIG_PATH

Certain services will also have a dedicated cache folder, which can be found in the same way:

harbor config ls | grep CACHE

# One-liner to open one of the cache folders
open $(eval echo "$(harbor config get hf.cache)")

Harbor Profiles

When you have multiple configurations for different use cases, you can save them as profiles for easy switching. Profiles include everything that can be configured via harbor config (most of the settings configured via the CLI) and are stored in the Harbor workspace. They are essentially a way to swap between .env files from the command line.

# Use a profile
$ ▼ h profile use default
21:49:20 [INFO] Profile 'default' loaded.

# Check settings specific to the vLLM service
$ ▼ h config ls | grep VLLM
VLLM_CACHE                     ~/.cache/vllm
VLLM_HOST_PORT                 33911
VLLM_VERSION                   v0.6.0
VLLM_MODEL                     microsoft/Phi-3.5-mini-instruct
VLLM_EXTRA_ARGS
VLLM_ATTENTION_BACKEND         FLASH_ATTN
VLLM_MODEL_SPECIFIER           --model microsoft/Phi-3.5-mini-instruct

# Switch to another profile
$ ▼ h profile use phimoe
21:49:42 [INFO] Profile 'phimoe' loaded.

# vLLM settings are different now
$ ▼ h config ls | grep VLLM
VLLM_CACHE                     ~/.cache/vllm
VLLM_HOST_PORT                 33911
VLLM_VERSION                   v0.6.1.post2
VLLM_MODEL                     microsoft/Phi-3.5-MoE-instruct
VLLM_EXTRA_ARGS                --max-model-len 1024 --trust-remote-code --cpu-offload-gb 56 --enforce-eager --gpu-memory-utilization 0 --device cpu
VLLM_ATTENTION_BACKEND         FLASH_ATTN
VLLM_MODEL_SPECIFIER           --model microsoft/Phi-3.5-MoE-instruct

There are a few considerations when using profiles:

  • When a profile is loaded, modifications are not saved by default and will be lost when switching to another profile (or reloading the current one). Use harbor profile save <name> to persist the changes after making them (see the example after this list)
  • Profiles are stored in the Harbor workspace and can be shared between different Harbor instances
  • Profiles are not versioned and are not guaranteed to work between different Harbor versions
  • You can also edit profiles as .env files in the workspace; it's not necessary to use the CLI
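A possible save-and-switch session using the commands from this section (the profile name and model are illustrative):

# Tweak the current config, then persist it as a named profile
harbor vllm model microsoft/Phi-3.5-mini-instruct
harbor profile save my-phi

# Later, switch back to it at any time
harbor profile use my-phi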

Using Satellite Services

Harbor comes with a variety of satellite services intended to be used with LLMs. These services can be started, stopped and configured in the same way as the Frontends and Backends.

SearXNG (Web Search)

SearXNG is a great example of a satellite service that is exceptionally useful both for LLMs and on its own.

harbor up searxng

Open WebUI (and many other built-in services) will automatically use it for Web RAG functionality.

Text-to-Speech

Start TTS service:

harbor up tts

Configure voices in tts/config/voice_to_speaker.yaml
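Since this file lives in the Harbor workspace, you can locate and open it with the harbor home helper shown earlier (assuming your system has an open command):

# Edit the TTS voice configuration
open $(harbor home)/tts/config/voice_to_speaker.yaml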

Other Satellites

Please refer to the Satellites section in the Services documentation for more satellite services and their configuration in Harbor.

Troubleshooting

Harbor combines many services and configurations, all of which are in constant motion, receiving updates and changes. This can sometimes lead to issues. Harbor comes with a set of tools to help you diagnose and fix them.

Service State

When something unexpected happens, the very first thing to check is the state of the service.

# See currently running services
# [-a] - active services only
harbor ls -a

# See "docker ps" specifically for services managed by Harbor
harbor ps

Service Logs

If a service is not running, it likely didn't start or crashed right after starting. You can check the logs to see what happened.

# Show logs for all running services
harbor logs

# Show logs for a specific service
harbor logs <service handle>

# Start tailing logs immediately
# after the service is started
harbor up <service handle> --tail

Commands in the service container

You can always inspect a service as it runs via a shell; Harbor has a few built-in helpers to simplify this process:

# Open a shell in the service container
harbor shell <service handle>

# Exec a command in a running service
harbor exec <service handle> <command>

# Run a command in a service container starting it if necessary
harbor run <service handle> <command>
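For example, a couple of one-off checks inside running containers (the tools and paths are illustrative and depend on the container image):

# Check GPU visibility inside the ollama container
harbor exec ollama nvidia-smi

# Look around the webui container's filesystem
harbor exec webui ls /app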

File System Permissions

Mounting volumes in Docker often comes with permission issues: Docker mounts your local files into the container with the exact same permissions they have on your host machine. So if the files are owned by user ID 1000 on your host and the container happens to run as user ID 100, the container won't be able to access them. Vice versa, when the container's user writes something to a volume, your host user might not be able to access those files, since their permissions are set to the container's user.

Harbor has an unsafe utility (Linux only, use at your own discretion) to fix such issues at locations managed by Harbor.

# Will walk through all folders managed by Harbor,
# including global service caches
# and will apply FS ACLs to match UID/GID of containers
# used by Harbor
harbor fixfs

Accessing Service URLs

Get the URL for any service:

harbor url <service handle>

See the harbor url and harbor logs CLI References for more information.

Stopping Services

# Stop all running services
harbor down

# 💡 TIP: many Harbor commands have aliases
harbor d

# Stop a specific service
harbor down <service handle>
harbor d <service handle>
