diff --git a/README.md b/README.md
index b4b7b1e67f..5ade3a0924 100644
--- a/README.md
+++ b/README.md
@@ -221,23 +221,17 @@ For cloud GPU providers that support docker images, use [`winglian/axolotl-cloud
        python get-pip.py
        ```
 
-  3. Install torch
-  ```bash
-  pip3 install -U torch --index-url https://download.pytorch.org/whl/cu118
-  ```
+  3. Install PyTorch: https://pytorch.org/get-started/locally/
 
-  4. Axolotl
-  ```bash
-  git clone https://github.com/OpenAccess-AI-Collective/axolotl
-  cd axolotl
+  4. Follow the Quickstart instructions (see the sketch below).
 
-  pip3 install packaging
-  pip3 install -e '.[flash-attn,deepspeed]'
+  5. Run:
+  ```bash
     pip3 install protobuf==3.20.3
     pip3 install -U --ignore-installed requests Pillow psutil scipy
     ```
 
-  5. Set path
+  6. Set path
     ```bash
     export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
     ```
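For reference, a minimal sketch of the Quickstart commands that step 4 above points to, mirroring the commands this hunk removes; it assumes a CUDA-capable machine with PyTorch already installed:

```bash
# Clone the repository and install axolotl with the common extras
git clone https://github.com/OpenAccess-AI-Collective/axolotl
cd axolotl

pip3 install packaging
pip3 install -e '.[flash-attn,deepspeed]'
```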
@@ -389,66 +383,6 @@ See [examples](examples) for quick start. It is recommended to duplicate and mod
 
 See [these docs](docs/config.qmd) for all config options.
 
-<details>
-<summary>Understanding of batch size and gradient accumulation steps</summary>
-
-Gradient accumulation means accumulating gradients over several mini-batches and updating the model weights afterward. When the samples in each batch are diverse, this technique doesn't significantly impact learning.
-
-This method allows for effective training with larger effective batch sizes without needing proportionally larger memory. Here's why:
-
-1. **Memory Consumption with Batch Size**: The primary reason increasing the batch size impacts memory is due to the storage requirements for intermediate activations. When you forward propagate a batch through a network, you have to store the activations at each layer for each sample in the batch, because these activations are used during backpropagation to compute gradients. Therefore, larger batches mean more activations, leading to greater GPU memory consumption.
-
-2. **Gradient Accumulation**: With gradient accumulation, you're effectively simulating a larger batch size by accumulating gradients over several smaller batches (or micro-batches). However, at any given time, you're only forward and backward propagating a micro-batch. This means you only store activations for the micro-batch, not the full accumulated batch. As a result, you can simulate the effect of a larger batch size without the memory cost of storing activations for a large batch.
-
-**Example 1:**
-Micro batch size: 3
-Gradient accumulation steps: 2
-Number of GPUs: 3
-Total batch size = 3 * 2 * 3 = 18
-
-```
-| GPU 1          | GPU 2          | GPU 3          |
-|----------------|----------------|----------------|
-| S1, S2, S3     | S4, S5, S6     | S7, S8, S9     |
-| e1, e2, e3     | e4, e5, e6     | e7, e8, e9     |
-|----------------|----------------|----------------|
-| → (accumulate) | → (accumulate) | → (accumulate) |
-|----------------|----------------|----------------|
-| S10, S11, S12  | S13, S14, S15  | S16, S17, S18  |
-| e10, e11, e12  | e13, e14, e15  | e16, e17, e18  |
-|----------------|----------------|----------------|
-| → (apply)      | → (apply)      | → (apply)      |
-
-Accumulated gradient for the weight w1 after the second iteration (considering all GPUs):
-Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8 + e9 + e10 + e11 + e12 + e13 + e14 + e15 + e16 + e17 + e18
-
-Weight update for w1:
-w1_new = w1_old - learning rate x (Total gradient for w1 / 18)
-```
-
-**Example 2:**
-Micro batch size: 2
-Gradient accumulation steps: 1
-Number of GPUs: 3
-Total batch size = 2 * 1 * 3 = 6
-
-```
-| GPU 1     | GPU 2     | GPU 3     |
-|-----------|-----------|-----------|
-| S1, S2    | S3, S4    | S5, S6    |
-| e1, e2    | e3, e4    | e5, e6    |
-|-----------|-----------|-----------|
-| → (apply) | → (apply) | → (apply) |
-
-Accumulated gradient for the weight w1 (considering all GPUs):
-Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6
-
-Weight update for w1:
-w1_new = w1_old - learning rate × (Total gradient for w1 / 6)
-```
-
-</details>
-
 ### Train
 
 Run
@@ -678,14 +612,8 @@ Bugs? Please check the [open issues](https://github.com/OpenAccess-AI-Collective
 PRs are **greatly welcome**!
 
-Please run below to setup env
+Please follow the quickstart instructions, then run the commands below to set up the dev environment:
 
 ```bash
-git clone https://github.com/OpenAccess-AI-Collective/axolotl
-cd axolotl
-
-pip3 install packaging ninja
-pip3 install -e '.[flash-attn,deepspeed]'
-
 pip3 install -r requirements-dev.txt -r requirements-tests.txt
 
 pre-commit install
diff --git a/docs/batch_vs_grad.qmd b/docs/batch_vs_grad.qmd
new file mode 100644
index 0000000000..e7b3b7d818
--- /dev/null
+++ b/docs/batch_vs_grad.qmd
@@ -0,0 +1,59 @@
+---
+title: Batch size vs Gradient accumulation
+description: Understanding of batch size and gradient accumulation steps
+---
+
+Gradient accumulation means accumulating gradients over several mini-batches and updating the model weights afterward. When the samples in each batch are diverse, this technique doesn't significantly impact learning.
+
+This method allows training with larger effective batch sizes without needing proportionally more memory. Here's why:
+
+1. **Memory Consumption with Batch Size**: The primary reason increasing the batch size impacts memory is due to the storage requirements for intermediate activations. When you forward propagate a batch through a network, you have to store the activations at each layer for each sample in the batch, because these activations are used during backpropagation to compute gradients. Therefore, larger batches mean more activations, leading to greater GPU memory consumption.
+
+2. **Gradient Accumulation**: With gradient accumulation, you're effectively simulating a larger batch size by accumulating gradients over several smaller batches (or micro-batches). However, at any given time, you're only forward and backward propagating a micro-batch. This means you only store activations for the micro-batch, not the full accumulated batch. As a result, you can simulate the effect of a larger batch size without the memory cost of storing activations for a large batch.
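In axolotl these two quantities correspond to the `micro_batch_size` and `gradient_accumulation_steps` config options. A minimal sketch, using the same illustrative values as Example 1 below:

```yaml
micro_batch_size: 3             # samples each GPU processes per forward/backward pass
gradient_accumulation_steps: 2  # micro-batches accumulated before each optimizer step
# With 3 GPUs, the effective total batch size is 3 * 2 * 3 = 18,
# while activation memory still scales only with the micro batch size of 3.
```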
+
+**Example 1:**
+Micro batch size: 3
+Gradient accumulation steps: 2
+Number of GPUs: 3
+Total batch size = 3 * 2 * 3 = 18
+
+```
+| GPU 1          | GPU 2          | GPU 3          |
+|----------------|----------------|----------------|
+| S1, S2, S3     | S4, S5, S6     | S7, S8, S9     |
+| e1, e2, e3     | e4, e5, e6     | e7, e8, e9     |
+|----------------|----------------|----------------|
+| → (accumulate) | → (accumulate) | → (accumulate) |
+|----------------|----------------|----------------|
+| S10, S11, S12  | S13, S14, S15  | S16, S17, S18  |
+| e10, e11, e12  | e13, e14, e15  | e16, e17, e18  |
+|----------------|----------------|----------------|
+| → (apply)      | → (apply)      | → (apply)      |
+
+Accumulated gradient for the weight w1 after the second iteration (considering all GPUs):
+Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8 + e9 + e10 + e11 + e12 + e13 + e14 + e15 + e16 + e17 + e18
+
+Weight update for w1:
+w1_new = w1_old - learning rate × (Total gradient for w1 / 18)
+```
+
+**Example 2:**
+Micro batch size: 2
+Gradient accumulation steps: 1
+Number of GPUs: 3
+Total batch size = 2 * 1 * 3 = 6
+
+```
+| GPU 1     | GPU 2     | GPU 3     |
+|-----------|-----------|-----------|
+| S1, S2    | S3, S4    | S5, S6    |
+| e1, e2    | e3, e4    | e5, e6    |
+|-----------|-----------|-----------|
+| → (apply) | → (apply) | → (apply) |
+
+Accumulated gradient for the weight w1 (considering all GPUs):
+Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6
+
+Weight update for w1:
+w1_new = w1_old - learning rate × (Total gradient for w1 / 6)
+```
diff --git a/docs/dataset-formats/conversation.qmd b/docs/dataset-formats/conversation.qmd
index 9e69df4927..f7d0cac826 100644
--- a/docs/dataset-formats/conversation.qmd
+++ b/docs/dataset-formats/conversation.qmd
@@ -1,12 +1,10 @@
 ---
 title: Conversation
 description: Conversation format for supervised fine-tuning.
-order: 1
+order: 3
 ---
 
-## Formats
-
-### sharegpt
+## sharegpt
 
 conversations where `from` is `human`/`gpt`. (optional: first row with role `system` to override default system prompt)
@@ -14,15 +12,33 @@ conversations where `from` is `human`/`gpt`. (optional: first row with role `sys
 ```{.json filename="data.jsonl"}
 {"conversations": [{"from": "...", "value": "..."}]}
 ```
 
-Note: `type: sharegpt` opens a special config `conversation:` that enables conversions to many Conversation types. See [the docs](../docs/config.qmd) for all config options.
+Note: `type: sharegpt` opens up the following special configs:
+- `conversation`: enables conversions to many Conversation types. Refer to the 'name' [here](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) for options.
+- `roles`: allows you to specify the roles for input and output. This is useful for datasets with custom roles, such as `tool`, to support masking.
+- `field_human`: specify the key to use instead of `human` in the conversation.
+- `field_model`: specify the key to use instead of `gpt` in the conversation.
+
+```yaml
+datasets:
+  path: ...
+  type: sharegpt
+
+  conversation:  # Options (see Conversation 'name'): https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
+  field_human: # Optional[str]. Human key to use for conversation.
+  field_model: # Optional[str]. Assistant key to use for conversation.
+  # Add additional keys from your dataset as input or output roles
+  roles:
+    input: # Optional[List[str]]. These will be masked based on train_on_input
+    output: # Optional[List[str]].
+```
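As an illustration, here is one way the skeleton above might be filled in for a hypothetical dataset whose conversations use `user`/`assistant`/`tool` roles; the dataset path and role names are placeholders, and the conversation template is one of the FastChat names:

```yaml
datasets:
  - path: my-org/my-sharegpt-data   # placeholder dataset
    type: sharegpt
    conversation: chatml            # any 'name' registered in FastChat's conversation.py
    field_human: user               # this dataset uses "user" instead of "human"
    field_model: assistant          # and "assistant" instead of "gpt"
    roles:
      input: ["user", "tool"]       # masked according to train_on_input
      output: ["assistant"]
```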
 
-### pygmalion
+## pygmalion
 
 ```{.json filename="data.jsonl"}
 {"conversations": [{"role": "...", "value": "..."}]}
 ```
 
-### sharegpt.load_role
+## sharegpt.load_role
 
 conversations where `role` is used instead of `from`
@@ -30,7 +46,7 @@ conversations where `role` is used instead of `from`
 ```{.json filename="data.jsonl"}
 {"conversations": [{"role": "...", "value": "..."}]}
 ```
 
-### sharegpt.load_guanaco
+## sharegpt.load_guanaco
 
 conversations where `from` is `prompter` `assistant` instead of default sharegpt
@@ -38,34 +54,10 @@ conversations where `from` is `prompter` `assistant` instead of default sharegpt
 ```{.json filename="data.jsonl"}
 {"conversations": [{"from": "...", "value": "..."}]}
 ```
 
-### sharegpt_jokes
+## sharegpt_jokes
 
 creates a chat where bot is asked to tell a joke, then explain why the joke is funny
 
 ```{.json filename="data.jsonl"}
 {"conversations": [{"title": "...", "text": "...", "explanation": "..."}]}
 ```
-
-## How to add custom prompts for instruction-tuning
-
-For a dataset that is preprocessed for instruction purposes:
-
-```{.json filename="data.jsonl"}
-{"input": "...", "output": "..."}
-```
-
-You can use this example in your YAML config:
-
-```{.yaml filename="config.yaml"}
-datasets:
-  - path: repo
-    type:
-      system_prompt: ""
-      field_system: system
-      field_instruction: input
-      field_output: output
-      format: "[INST] {instruction} [/INST]"
-      no_input_format: "[INST] {instruction} [/INST]"
-```
-
-See full config options under [here](../docs/config.qmd).
diff --git a/docs/dataset-formats/inst_tune.qmd b/docs/dataset-formats/inst_tune.qmd
index cc8cd16f30..d89c6adaf5 100644
--- a/docs/dataset-formats/inst_tune.qmd
+++ b/docs/dataset-formats/inst_tune.qmd
@@ -163,3 +163,27 @@ instruction, adds additional eos tokens
 ```{.json filename="data.jsonl"}
 {"prompt": "...", "generation": "..."}
 ```
+
+## How to add a custom prompt format
+
+For a dataset that is preprocessed for instruction purposes:
+
+```{.json filename="data.jsonl"}
+{"input": "...", "output": "..."}
+```
+
+You can use this example in your YAML config:
+
+```{.yaml filename="config.yaml"}
+datasets:
+  - path: repo
+    type:
+      system_prompt: ""
+      field_system: system
+      field_instruction: input
+      field_output: output
+      format: "[INST] {instruction} [/INST]"
+      no_input_format: "[INST] {instruction} [/INST]"
+```
+
+See the full config options [here](../config.qmd).
diff --git a/docs/dataset-formats/pretraining.qmd b/docs/dataset-formats/pretraining.qmd
index 7e7257205a..bb591328e2 100644
--- a/docs/dataset-formats/pretraining.qmd
+++ b/docs/dataset-formats/pretraining.qmd
@@ -1,7 +1,7 @@
 ---
 title: Pre-training
 description: Data format for a pre-training completion task.
-order: 3
+order: 1
 ---
 
 For pretraining, there is no prompt template or roles. The only required field is `text`:
diff --git a/docs/input_output.qmd b/docs/input_output.qmd
index 6261f23895..3762901b31 100644
--- a/docs/input_output.qmd
+++ b/docs/input_output.qmd
@@ -43,7 +43,7 @@ labels so that your model can focus on predicting the outputs only.
 ### You may not want prompt templates
 
 However, there are many situations where you don't want to use one of
-these formats or templates (I usually don't!). This is because they can:
+these formats or templates. This is because they can:
 
 - Add unnecessary boilerplate to your prompts.
 - Create artifacts like special delimiters `<|im_start|>` that can
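To make the custom prompt format above concrete, here is a rough sketch of how a single row would be rendered under that config: `field_instruction: input` feeds the `input` key into the `{instruction}` placeholder of `format`, and the `output` field becomes the completion the model is trained on. The sample text is invented for illustration:

```
Row:     {"input": "Summarize the plot in one sentence.", "output": "A retired spy is pulled back in for one last job."}

Prompt:  [INST] Summarize the plot in one sentence. [/INST]
Labels:  A retired spy is pulled back in for one last job.
```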