
Commit 1e9b115

chore(docs): enhancements and clarifications (#6433)
chore(docs): Small enhancements

Fixes: #6250
Relates to: #6251
Fixes: #6249
Fixes: #6253

Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent cd1e112 commit 1e9b115

File tree: 3 files changed (+53, −2 lines)

README.md

Lines changed: 7 additions & 0 deletions
@@ -118,6 +118,13 @@ For more installation options, see [Installer Options](https://localai.io/docs/a
Or run with docker:

> **💡 Docker Run vs Docker Start**
>
> - `docker run` creates and starts a new container. If a container with the same name already exists, this command will fail.
> - `docker start` starts an existing container that was previously created with `docker run`.
>
> If you've already run LocalAI before and want to start it again, use: `docker start -i local-ai`
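For illustration, a minimal sketch of the two commands in sequence (the `latest-aio-cpu` tag is an assumption, one of several images; adjust to your setup):

```bash
# First launch: create and start a new container named "local-ai"
# (image tag is illustrative; pick the one matching your hardware)
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu

# Subsequent launches: reuse the container created above
docker start -i local-ai
```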
### CPU only image:

docs/content/docs/faq.md

Lines changed: 30 additions & 0 deletions
@@ -16,6 +16,36 @@ Here are answers to some of the most common questions.
Most gguf-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open an issue. However, be cautious about downloading models from the internet directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=gguf, and models from gpt4all are compatible too: https://github.com/nomic-ai/gpt4all.
### Where are models stored?

LocalAI stores downloaded models in the following locations by default:

- **Command line**: `./models` (relative to the current working directory)
- **Docker**: `/models` (inside the container, typically mounted to `./models` on the host)
- **Launcher application**: `~/.localai/models` (in your home directory)

You can customize the model storage location using the `LOCALAI_MODELS_PATH` environment variable or the `--models-path` command line flag, as sketched below. This is useful if you want to store models outside your home directory, for example for backup purposes or to avoid filling it up with large model files.
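As a sketch, either of the following points LocalAI at a custom directory (the target path is illustrative):

```bash
# Environment variable form (path is illustrative)
LOCALAI_MODELS_PATH=/mnt/big-disk/localai-models local-ai run

# Equivalent CLI flag form
local-ai run --models-path /mnt/big-disk/localai-models
```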
### How much storage space do models require?

Model sizes vary significantly depending on the model and quantization level:

- **Small models (1-3B parameters)**: 1-3 GB
- **Medium models (7-13B parameters)**: 4-8 GB
- **Large models (30B+ parameters)**: 15-30+ GB

**Quantization levels** (smaller files, slightly reduced quality):
- `Q4_K_M`: ~75% of original size
- `Q4_K_S`: ~60% of original size
- `Q2_K`: ~50% of original size

For example, a 7B-parameter model quantized to `Q4_K_M` typically weighs in around 4-5 GB.

**Storage recommendations**:
- Ensure you have at least 2-3x the model size available for downloads and temporary files
- Use SSD storage for better performance
- Consider the model size relative to your system RAM: models larger than your RAM may not run efficiently

The WebUI shows model sizes in the Models tab to help you choose appropriate models for your system.
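Standard shell tools are enough to check sizes and free space before downloading; a small sketch assuming the default `./models` directory:

```bash
# Free space on the volume holding the models directory
df -h ./models

# Size of each downloaded model file (gguf layout assumed)
du -sh ./models/*.gguf
```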
### Benchmarking LocalAI and llama.cpp shows different results!

LocalAI applies a set of defaults when loading models with the llama.cpp backend; one of these is mirostat sampling, which achieves better results but slows down inference. You can disable it by setting `mirostat: 0` in the model config file, as sketched below. See also the advanced section ({{%relref "docs/advanced/advanced-usage" %}}) for more information and [this issue](https://github.com/mudler/LocalAI/issues/2780).
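As a minimal sketch of such a config (the model name and file are hypothetical; the relevant line is `mirostat: 0`, and the full schema is described in the advanced section):

```bash
# Write a minimal model config next to the model file (names are hypothetical)
cat > models/my-model.yaml <<'EOF'
name: my-model
mirostat: 0           # disable the mirostat sampling default
parameters:
  model: my-model.Q4_K_M.gguf
EOF
```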

docs/content/docs/getting-started/quickstart.md

Lines changed: 16 additions & 2 deletions
@@ -25,6 +25,8 @@ If you are exposing LocalAI remotely, make sure you protect the API endpoints ad
```bash
curl https://localai.io/install.sh | sh
```

If Docker is not detected, the bash installer will automatically set LocalAI up as a systemd service.
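If the installer did set up a systemd service, something along these lines lets you inspect it (the `local-ai` unit name is an assumption; check the installer docs for the actual name):

```bash
# Check the service status (unit name assumed to be "local-ai")
sudo systemctl status local-ai

# Follow the service logs
sudo journalctl -u local-ai -f
```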
See [Installer]({{% relref "docs/advanced/installer" %}}) for all the supported options.

### macOS Download
@@ -35,6 +37,16 @@ See [Installer]({{% relref "docs/advanced/installer" %}}) for all the supported
### Run with docker

{{% alert icon="💡" %}}
**Docker Run vs Docker Start**

- `docker run` creates and starts a new container. If a container with the same name already exists, this command will fail.
- `docker start` starts an existing container that was previously created with `docker run`.

If you've already run LocalAI before and want to start it again, use: `docker start -i local-ai`
{{% /alert %}}
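If you're unsure whether the container already exists, a quick check along these lines helps decide which command to use:

```bash
# List containers (running or stopped) whose name matches "local-ai"
docker ps -a --filter name=local-ai

# If it shows up, start the existing container instead of creating a new one
docker start -i local-ai
```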
The following commands will automatically start LocalAI with a web interface and a REST API on port `8080`.

#### CPU only image:
@@ -93,7 +105,9 @@ docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel
```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
```

-### Load models:
+### Downloading models on start

When starting LocalAI (either via Docker or via the CLI), you can pass a list of models as arguments; they will be installed automatically before the API starts, for example:

```bash
# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or by visiting https://models.localai.io)
```
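For instance, reusing the OCI reference that appears later on this page (the bare gallery name is illustrative; check `local-ai models list` for real entries):

```bash
# Pull and run a model from an OCI registry (reference taken from these docs)
local-ai run oci://localai/phi-2:latest

# Or install by gallery name (name illustrative; see `local-ai models list`)
local-ai run phi-2
```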
@@ -112,7 +126,7 @@ local-ai run oci://localai/phi-2:latest
**Automatic Backend Detection**: When you install models from the gallery or YAML files, LocalAI automatically detects your system's GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see [GPU Acceleration]({{% relref "docs/features/gpu-acceleration#automatic-backend-detection" %}}).
{{% /alert %}}

-For a full list of options, refer to the [Installer Options]({{% relref "docs/advanced/installer" %}}) documentation.
+For a full list of options, you can run LocalAI with `--help` or refer to the [Installer Options]({{% relref "docs/advanced/installer" %}}) documentation.

Binaries can also be [manually downloaded]({{% relref "docs/reference/binaries" %}}).
118132
