diff --git a/.github/workflows/basic-tests-windows.yml b/.github/workflows/basic-tests-windows.yml index c1b24b87..a09588db 100644 --- a/.github/workflows/basic-tests-windows.yml +++ b/.github/workflows/basic-tests-windows.yml @@ -37,6 +37,7 @@ jobs: python -m pip install --upgrade pip pip install pytest nbval if [ -f requirements.txt ]; then pip install -r requirements.txt; fi + pip install matplotlib==3.9.0 - name: Test Selected Python Scripts shell: bash diff --git a/.gitignore b/.gitignore index f60d5a1f..77c9c565 100644 --- a/.gitignore +++ b/.gitignore @@ -85,6 +85,8 @@ ch07/01_main-chapter-code/instruction-data-with-response-alpaca52k.json ch07/01_main-chapter-code/instruction-data-with-response-lora.json ch07/01_main-chapter-code/instruction-data-with-response-phi3-prompt.json ch07/02_dataset-utilities/instruction-examples-modified.json +ch07/04_preference-tuning-with-dpo/gpt2-medium355M-sft.pth +ch07/04_preference-tuning-with-dpo/loss-plot.pdf # Temporary OS-related files .DS_Store diff --git a/README.md b/README.md index 8d5764fc..67096a0e 100644 --- a/README.md +++ b/README.md @@ -18,6 +18,9 @@ The method described in this book for training and developing your own small-but - [Link to the book page on Amazon](https://www.amazon.com/gp/product/1633437167) - ISBN 9781633437166 + + +

@@ -58,14 +61,14 @@ Alternatively, you can view this and other files on GitHub at [https://github.co | Chapter Title | Main Code (for quick access) | All Code + Supplementary | |------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|-------------------------------| -| [Setup recommendations](setup) | - | - | +| [Setup recommendations](setup) | - | - | | Ch 1: Understanding Large Language Models | No code | - | -| Ch 2: Working with Text Data | - [ch02.ipynb](ch02/01_main-chapter-code/ch02.ipynb)
- [dataloader.ipynb](ch02/01_main-chapter-code/dataloader.ipynb) (summary)
- [exercise-solutions.ipynb](ch02/01_main-chapter-code/exercise-solutions.ipynb) | [./ch02](./ch02) | -| Ch 3: Coding Attention Mechanisms | - [ch03.ipynb](ch03/01_main-chapter-code/ch03.ipynb)
- [multihead-attention.ipynb](ch03/01_main-chapter-code/multihead-attention.ipynb) (summary)
- [exercise-solutions.ipynb](ch03/01_main-chapter-code/exercise-solutions.ipynb)| [./ch03](./ch03) | +| Ch 2: Working with Text Data | - [ch02.ipynb](ch02/01_main-chapter-code/ch02.ipynb)
- [dataloader.ipynb](ch02/01_main-chapter-code/dataloader.ipynb) (summary)
- [exercise-solutions.ipynb](ch02/01_main-chapter-code/exercise-solutions.ipynb) | [./ch02](./ch02) | +| Ch 3: Coding Attention Mechanisms | - [ch03.ipynb](ch03/01_main-chapter-code/ch03.ipynb)
- [multihead-attention.ipynb](ch03/01_main-chapter-code/multihead-attention.ipynb) (summary)
- [exercise-solutions.ipynb](ch03/01_main-chapter-code/exercise-solutions.ipynb)| [./ch03](./ch03) | | Ch 4: Implementing a GPT Model from Scratch | - [ch04.ipynb](ch04/01_main-chapter-code/ch04.ipynb)
- [gpt.py](ch04/01_main-chapter-code/gpt.py) (summary)
- [exercise-solutions.ipynb](ch04/01_main-chapter-code/exercise-solutions.ipynb) | [./ch04](./ch04) | | Ch 5: Pretraining on Unlabeled Data | - [ch05.ipynb](ch05/01_main-chapter-code/ch05.ipynb)
- [gpt_train.py](ch05/01_main-chapter-code/gpt_train.py) (summary)
- [gpt_generate.py](ch05/01_main-chapter-code/gpt_generate.py) (summary)
- [exercise-solutions.ipynb](ch05/01_main-chapter-code/exercise-solutions.ipynb) | [./ch05](./ch05) | | Ch 6: Finetuning for Text Classification | - [ch06.ipynb](ch06/01_main-chapter-code/ch06.ipynb)
- [gpt_class_finetune.py](ch06/01_main-chapter-code/gpt_class_finetune.py)
- [exercise-solutions.ipynb](ch06/01_main-chapter-code/exercise-solutions.ipynb) | [./ch06](./ch06) | -| Ch 7: Finetuning to Follow Instructions | - [ch07.ipynb](ch07/01_main-chapter-code/ch07.ipynb)
- [gpt_instruction_finetuning.py](ch07/01_main-chapter-code/gpt_instruction_finetuning.py)
- [ollama_evaluate.py](ch07/01_main-chapter-code/ollama_evaluate.py)
- [exercise-solutions.ipynb](ch07/01_main-chapter-code/exercise-solutions.ipynb) | [./ch07](./ch07) | +| Ch 7: Finetuning to Follow Instructions | - [ch07.ipynb](ch07/01_main-chapter-code/ch07.ipynb)
- [gpt_instruction_finetuning.py](ch07/01_main-chapter-code/gpt_instruction_finetuning.py) (summary)
- [ollama_evaluate.py](ch07/01_main-chapter-code/ollama_evaluate.py) (summary)
- [exercise-solutions.ipynb](ch07/01_main-chapter-code/exercise-solutions.ipynb) | [./ch07](./ch07) | | Appendix A: Introduction to PyTorch | - [code-part1.ipynb](appendix-A/01_main-chapter-code/code-part1.ipynb)
- [code-part2.ipynb](appendix-A/01_main-chapter-code/code-part2.ipynb)
- [DDP-script.py](appendix-A/01_main-chapter-code/DDP-script.py)
- [exercise-solutions.ipynb](appendix-A/01_main-chapter-code/exercise-solutions.ipynb) | [./appendix-A](./appendix-A) | | Appendix B: References and Further Reading | No code | - | | Appendix C: Exercise Solutions | No code | - | @@ -118,6 +121,7 @@ Several folders contain optional materials as a bonus for interested readers: - [Evaluating Instruction Responses Using the OpenAI API and Ollama](ch07/03_model-evaluation) - [Generating a Dataset for Instruction Finetuning](ch07/05_dataset-generation) - [Generating a Preference Dataset with Llama 3.1 70B and Ollama](ch07/04_preference-tuning-with-dpo/create-preference-data-ollama.ipynb) + - [Direct Preference Optimization (DPO) for LLM Alignment](ch07/04_preference-tuning-with-dpo/dpo-from-scratch.ipynb)
  diff --git a/ch05/01_main-chapter-code/ch05.ipynb b/ch05/01_main-chapter-code/ch05.ipynb index 792bcc43..394777f0 100644 --- a/ch05/01_main-chapter-code/ch05.ipynb +++ b/ch05/01_main-chapter-code/ch05.ipynb @@ -2406,7 +2406,7 @@ "id": "6d079f98-a7c4-462e-8416-5a64f670861c", "metadata": {}, "source": [ - "- We know that we loaded the model weights correctly because the model can generate coherent text; if we made even a small mistake, the mode would not be able to do that" + "- We know that we loaded the model weights correctly because the model can generate coherent text; if we made even a small mistake, the model would not be able to do that" ] }, { diff --git a/ch06/02_bonus_additional-experiments/additional-experiments.py b/ch06/02_bonus_additional-experiments/additional-experiments.py index 6246c61b..bdb94b34 100644 --- a/ch06/02_bonus_additional-experiments/additional-experiments.py +++ b/ch06/02_bonus_additional-experiments/additional-experiments.py @@ -259,7 +259,8 @@ def train_classifier_simple(model, train_loader, val_loader, optimizer, device, loss.backward() # Calculate loss gradients # Use gradient accumulation if accumulation_steps > 1 - if batch_idx % accumulation_steps == 0: + is_update_step = ((batch_idx + 1) % accumulation_steps == 0) or ((batch_idx + 1) == len(train_loader)) + if is_update_step: optimizer.step() # Update model weights using loss gradients optimizer.zero_grad() # Reset loss gradients from previous batch iteration diff --git a/ch07/01_main-chapter-code/ch07.ipynb b/ch07/01_main-chapter-code/ch07.ipynb index 892c8b07..71f3d2a6 100644 --- a/ch07/01_main-chapter-code/ch07.ipynb +++ b/ch07/01_main-chapter-code/ch07.ipynb @@ -2722,7 +2722,7 @@ "- I hope you enjoyed this journey of implementing an LLM from the ground up and coding the pretraining and finetuning functions\n", "- In my opinion, implementing an LLM from scratch is the best way to understand how LLMs work; I hope you gained a better understanding through this approach\n", "- While this book serves educational purposes, you may be interested in using different and more powerful LLMs for real-world applications\n", - " - For this, you may consider popular tools such as axolotl ([https://github.com/OpenAccess-AI-Collective/axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)) or LitGPT ([https://github.com/Lightning-AI/litgpt](https://github.com/Lightning-AI/litgpt), which I help developing" + " - For this, you may consider popular tools such as axolotl ([https://github.com/OpenAccess-AI-Collective/axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)) or LitGPT ([https://github.com/Lightning-AI/litgpt](https://github.com/Lightning-AI/litgpt)), which I help develop" ] }, { @@ -2762,7 +2762,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.6" + "version": "3.10.11" } }, "nbformat": 4, diff --git a/ch07/04_preference-tuning-with-dpo/README.md b/ch07/04_preference-tuning-with-dpo/README.md index 9c274642..3b71a647 100644 --- a/ch07/04_preference-tuning-with-dpo/README.md +++ b/ch07/04_preference-tuning-with-dpo/README.md @@ -2,11 +2,6 @@ - [create-preference-data-ollama.ipynb](create-preference-data-ollama.ipynb): A notebook that creates a synthetic dataset for preference finetuning dataset using Llama 3.1 and Ollama -- In progress ... 
+- [dpo-from-scratch.ipynb](dpo-from-scratch.ipynb): This notebook implements Direct Preference Optimization (DPO) for LLM alignment - -In the meantime, also see - -- LLM Training: RLHF and Its Alternatives, [https://magazine.sebastianraschka.com/p/llm-training-rlhf-and-its-alternatives](https://magazine.sebastianraschka.com/p/llm-training-rlhf-and-its-alternatives) -- Tips for LLM Pretraining and Evaluating Reward Models, [https://sebastianraschka.com/blog/2024/research-papers-in-march-2024.html](https://sebastianraschka.com/blog/2024/research-papers-in-march-2024.html) diff --git a/ch07/04_preference-tuning-with-dpo/dpo-from-scratch.ipynb b/ch07/04_preference-tuning-with-dpo/dpo-from-scratch.ipynb new file mode 100644 index 00000000..29c5d6ed --- /dev/null +++ b/ch07/04_preference-tuning-with-dpo/dpo-from-scratch.ipynb @@ -0,0 +1,3096 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "62129596-d10f-45b1-a1af-ee10f358f773", + "metadata": { + "id": "62129596-d10f-45b1-a1af-ee10f358f773" + }, + "source": [ + "\n", + "\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "Supplementary code for the Build a Large Language Model From Scratch book by Sebastian Raschka
\n", + "
Code repository: https://github.com/rasbt/LLMs-from-scratch\n", + "
\n", + "
\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "b0bd2379-ed2f-4c77-8b71-f1f0242b9ff9", + "metadata": { + "id": "b0bd2379-ed2f-4c77-8b71-f1f0242b9ff9" + }, + "source": [ + "# Direct Preference Optimization (DPO) for LLM Alignment (From Scratch)" + ] + }, + { + "cell_type": "markdown", + "id": "d04cb2b8-d87b-4c6b-a225-c630d758f68e", + "metadata": { + "id": "d04cb2b8-d87b-4c6b-a225-c630d758f68e" + }, + "source": [ + "- This code notebook implements Direct Preference Optimization (DPO) from scratch and applies it to a large language model (LLM) to enhance its ability to generate responses that align more closely with user preferences" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "pxMGAf3bnVwn", + "metadata": { + "id": "pxMGAf3bnVwn" + }, + "outputs": [], + "source": [ + "# !pip install -r https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/requirements.txt" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "edb3e145-fbaa-4bb3-9e95-186b4145087f", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "edb3e145-fbaa-4bb3-9e95-186b4145087f", + "outputId": "3d449525-76cc-4124-ab30-a93c6a9623ee" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tiktoken version: 0.7.0\n", + "torch version: 2.3.1+cu121\n" + ] + } + ], + "source": [ + "from importlib.metadata import version\n", + "\n", + "pkgs = [\n", + " \"tiktoken\", # Tokenizer\n", + " \"torch\", # Deep learning library\n", + "]\n", + "for p in pkgs:\n", + " print(f\"{p} version: {version(p)}\")" + ] + }, + { + "cell_type": "markdown", + "id": "49ec20a3-a26c-4f9b-8a33-bfd3d67860e2", + "metadata": { + "id": "49ec20a3-a26c-4f9b-8a33-bfd3d67860e2" + }, + "source": [ + " \n", + "# 1) A brief introduction to DPO" + ] + }, + { + "cell_type": "markdown", + "id": "17804afd-786b-4600-bad0-f5805454e3d6", + "metadata": { + "id": "17804afd-786b-4600-bad0-f5805454e3d6" + }, + "source": [ + "- DPO, proposed in the paper [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://arxiv.org/abs/2305.18290), is an alternative to reinforcement learning from human feedback (RLHF) used in finetuning large language models (LLMs)\n", + "- DPO can be used to finetune (or align) the model to generate responses that better align with user expectations and instructions\n", + "\n", + "\n", + "\n", + "- In instruction finetuning, we train the LLM to generate correct answers given a prompt\n", + "- However, in practice, there are multiple ways to give a correct answer, and correct answers can differ in style; for example, consider a technical and a more user-friendly response when asking an LLM to give recommendations when buying a laptop, as shown in the figure below\n", + "\n", + "\n", + "\n", + "- RLHF and DPO are methods that can be used to teach the LLM to prefer one answer style over the other, that is, aligning better with user preferences\n", + "- The RLHF process, which requires training a separate reward model, is outlined below\n", + "\n", + "" + ] + }, + { + "cell_type": "markdown", + "id": "9073622f-d537-42bf-8778-43c2adaa2191", + "metadata": { + "id": "9073622f-d537-42bf-8778-43c2adaa2191" + }, + "source": [ + "- Compared to RLHF, DPO aims to simplify the process by optimizing models directly for user preferences without the need for complex reward modeling and policy optimization\n", + "- In other words, DPO focuses on directly optimizing the model's output to align with human preferences or specific objectives\n", + 
"- Shown below is the main idea as an overview of how DPO works\n", + "\n", + "" + ] + }, + { + "cell_type": "markdown", + "id": "c894134a-315c-453e-bbc1-387794b3f4d6", + "metadata": { + "id": "c894134a-315c-453e-bbc1-387794b3f4d6" + }, + "source": [ + "- The concrete equation to implement the DPO loss is shown below; we will revisit the equation when we implement it in Python further down in this code notebook\n", + "\n", + "" + ] + }, + { + "cell_type": "markdown", + "id": "dd7491b5-f619-4501-ad39-2942de57c115", + "metadata": { + "id": "dd7491b5-f619-4501-ad39-2942de57c115" + }, + "source": [ + "- In the equation above,\n", + " - \"expected value\" $\\mathbb{E}$ is statistics jargon and stands for the average or mean value of the random variable (the expression inside the brackets)\n", + " - The $\\pi_{\\theta}$ variable is the so-called policy (a term borrowed from reinforcement learning) and represents the LLM we want to optimize; $\\pi_{ref}$ is a reference LLM, which is typically the original LLM before optimization (at the beginning of the training, $\\pi_{\\theta}$ and $\\pi_{ref}$ are typically the same)\n", + " - $\\beta$ is a hyperparameter to control the divergence between the $\\pi_{\\theta}$ and the reference model; increasing $\\beta$ increases the impact of the difference between\n", + "$\\pi_{\\theta}$ and $\\pi_{ref}$ in terms of their log probabilities on the overall loss function, thereby increasing the divergence between the two models\n", + "- To avoid bloating the code notebook with a more detailed discussion, I may write a separate standalone article with more details on these concepts in the future\n", + "- In the meantime, if you are interested in comparing RLHF and DPO, please see the section [2.2. RLHF vs Direct Preference Optimization (DPO)](https://magazine.sebastianraschka.com/i/142924793/rlhf-vs-direct-preference-optimization-dpo) in my article [Tips for LLM Pretraining and Evaluating Reward Models](https://magazine.sebastianraschka.com/p/tips-for-llm-pretraining-and-evaluating-rms)" + ] + }, + { + "cell_type": "markdown", + "id": "xqVAgsyQ6LuG", + "metadata": { + "id": "xqVAgsyQ6LuG", + "tags": [] + }, + "source": [ + " \n", + "# 2) Preparing a preference dataset for DPO" + ] + }, + { + "cell_type": "markdown", + "id": "60b2195d-8734-469b-a52e-5031ca7ea6b1", + "metadata": { + "id": "60b2195d-8734-469b-a52e-5031ca7ea6b1" + }, + "source": [ + "- Let's begin by loading and preparing the dataset, which may already answer a lot of the questions you might have before we revisit the DPO loss equation\n", + "- Here, we work with a dataset that contains more polite and less polite responses to instruction prompts (concrete examples are shown in the next section)\n", + "- The dataset was generated via the [create-preference-data-ollama.ipynb](create-preference-data-ollama.ipynb) notebook" + ] + }, + { + "cell_type": "markdown", + "id": "wHLB62Nj7haD", + "metadata": { + "id": "wHLB62Nj7haD" + }, + "source": [ + " \n", + "## 2.1) Loading a preference dataset" + ] + }, + { + "cell_type": "markdown", + "id": "13e09f99-1b18-4923-ba36-af46d8e3075f", + "metadata": { + "id": "13e09f99-1b18-4923-ba36-af46d8e3075f" + }, + "source": [ + "- The dataset is a json file with 1100 entries:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "5266e66c-5ec0-45e6-a654-148971f6aee7", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "5266e66c-5ec0-45e6-a654-148971f6aee7", + "outputId": "04e8ee70-3076-441d-d2bf-7641da3d0c1d" + }, + 
"outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Number of entries: 1100\n" + ] + } + ], + "source": [ + "import json\n", + "\n", + "\n", + "file_path = \"instruction-data-with-preference.json\"\n", + "\n", + "with open(file_path, \"r\", encoding=\"utf-8\") as file:\n", + " data = json.load(file)\n", + "\n", + "print(\"Number of entries:\", len(data))" + ] + }, + { + "cell_type": "markdown", + "id": "725d2b9a-d6d2-46e2-89f8-5ab87e040e3b", + "metadata": { + "id": "725d2b9a-d6d2-46e2-89f8-5ab87e040e3b" + }, + "source": [ + "- Let's take a look at two example entries:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "5c11916f-9a26-4367-a16e-7b0c121a20a6", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "5c11916f-9a26-4367-a16e-7b0c121a20a6", + "outputId": "00a432cc-19b1-484f-80e2-e897ee5e4024" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'instruction': 'Identify the correct spelling of the following word.',\n", + " 'input': 'Ocassion',\n", + " 'output': \"The correct spelling is 'Occasion.'\",\n", + " 'rejected': \"The correct spelling is obviously 'Occasion.'\",\n", + " 'chosen': \"The correct spelling is 'Occasion.'\"}\n" + ] + } + ], + "source": [ + "import pprint\n", + "\n", + "pprint.pp(data[50])" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "01ef804a-8c13-4a0b-9b2e-b65a4d0a870d", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "01ef804a-8c13-4a0b-9b2e-b65a4d0a870d", + "outputId": "078cd643-83fb-4b42-ecf9-3256e8c9d239" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'instruction': \"What is an antonym of 'complicated'?\",\n", + " 'input': '',\n", + " 'output': \"An antonym of 'complicated' is 'simple'.\",\n", + " 'chosen': \"A suitable antonym for 'complicated' would be 'simple'.\",\n", + " 'rejected': \"An antonym of 'complicated' is 'simple'.\"}\n" + ] + } + ], + "source": [ + "pprint.pp(data[999])" + ] + }, + { + "cell_type": "markdown", + "id": "56db5697-a089-4b40-a1f3-e928e8018220", + "metadata": { + "id": "56db5697-a089-4b40-a1f3-e928e8018220" + }, + "source": [ + "\n", + "\n", + "```\n", + "# This is formatted as code\n", + "```\n", + "\n", + "- As we can see above, the dataset consists of 5 keys:\n", + " - The `'instruction'` and `'input'` that are used as LLM inputs\n", + " - The `'output'` contains the response the model was trained on via the instruction finetuning step in chapter 7\n", + " - the `'chosen'` and `'rejected'` entries are the entries we use for DPO; here `'chosen'` is the preferred response, and `'rejected'` is the dispreferred response\n", + "- The goal is to get the model to follow the style of the chosen over the rejected responses" + ] + }, + { + "cell_type": "markdown", + "id": "86257468-a6ab-4ba3-9c9f-2fdc2c0cc284", + "metadata": { + "id": "86257468-a6ab-4ba3-9c9f-2fdc2c0cc284" + }, + "source": [ + "- Below is a utility function that formats the model input by applying the Alpaca prompt style similar to chapter 7 ([../01_main-chapter-code/ch07.ipynb](../01_main-chapter-code/ch07.ipynb)):" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "4564d55c-1c5d-46a6-b5e8-46ab568ad627", + "metadata": { + "id": "4564d55c-1c5d-46a6-b5e8-46ab568ad627" + }, + "outputs": [], + "source": [ + "def format_input(entry):\n", + " instruction_text = (\n", + " f\"Below is an instruction that describes a task. 
\"\n", + " f\"Write a response that appropriately completes the request.\"\n", + " f\"\\n\\n### Instruction:\\n{entry['instruction']}\"\n", + " )\n", + "\n", + " input_text = f\"\\n\\n### Input:\\n{entry['input']}\" if entry[\"input\"] else \"\"\n", + "\n", + " return instruction_text + input_text" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "3f38b49f-63fd-48c5-bde8-a4717b7923ea", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "3f38b49f-63fd-48c5-bde8-a4717b7923ea", + "outputId": "9ad07c59-05b3-42ae-c5bc-68780aaf6780" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Identify the correct spelling of the following word.\n", + "\n", + "### Input:\n", + "Ocassion\n" + ] + } + ], + "source": [ + "model_input = format_input(data[50])\n", + "print(model_input)" + ] + }, + { + "cell_type": "markdown", + "id": "7dd9e4c9-88a3-463a-8c16-c60ed7e6b51e", + "metadata": { + "id": "7dd9e4c9-88a3-463a-8c16-c60ed7e6b51e" + }, + "source": [ + "- Similarly, we can format the chosen and rejected responses using the Alpaca prompt style:" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "8ad5831a-e936-44e5-a5cf-02953fe7d848", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "8ad5831a-e936-44e5-a5cf-02953fe7d848", + "outputId": "2c0a0cbf-c13d-43cf-fcc1-a4585c21e66f" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "### Response:\n", + "The correct spelling is 'Occasion.'\n" + ] + } + ], + "source": [ + "desired_response = f\"### Response:\\n{data[50]['chosen']}\"\n", + "print(desired_response)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "fc0991f6-fef7-48ab-8dee-fbd2863f784c", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "fc0991f6-fef7-48ab-8dee-fbd2863f784c", + "outputId": "cd85406c-3470-48f8-9792-63f91affd50a" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "### Response:\n", + "The correct spelling is obviously 'Occasion.'\n" + ] + } + ], + "source": [ + "possible_response = f\"### Response:\\n{data[50]['rejected']}\"\n", + "print(possible_response)" + ] + }, + { + "cell_type": "markdown", + "id": "6G3j2Q987t_g", + "metadata": { + "id": "6G3j2Q987t_g" + }, + "source": [ + " \n", + "## 2.2) Creating training, validation, and test splits" + ] + }, + { + "cell_type": "markdown", + "id": "53ce2b1e-32d7-414c-8e6b-01f21a2488c2", + "metadata": { + "id": "53ce2b1e-32d7-414c-8e6b-01f21a2488c2" + }, + "source": [ + "- Next, we divide the dataset into 3 subsets, 85% training data, 5% validation data, and 10% test data:" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "36c7b919-8531-4e33-aebf-aaf8e6dbcfbd", + "metadata": { + "id": "36c7b919-8531-4e33-aebf-aaf8e6dbcfbd" + }, + "outputs": [], + "source": [ + "train_portion = int(len(data) * 0.85) # 85% for training\n", + "test_portion = int(len(data) * 0.1) # 10% for testing\n", + "val_portion = len(data) - train_portion - test_portion # Remaining 5% for validation\n", + "\n", + "train_data = data[:train_portion]\n", + "test_data = data[train_portion:train_portion + test_portion]\n", + "val_data = data[train_portion + test_portion:]" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": 
"831a6c1b-119b-4622-9862-87f1db36e066", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "831a6c1b-119b-4622-9862-87f1db36e066", + "outputId": "8e017483-1a75-4336-9540-ac6a69104e27" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Training set length: 935\n", + "Validation set length: 55\n", + "Test set length: 110\n" + ] + } + ], + "source": [ + "print(\"Training set length:\", len(train_data))\n", + "print(\"Validation set length:\", len(val_data))\n", + "print(\"Test set length:\", len(test_data))" + ] + }, + { + "cell_type": "markdown", + "id": "c07d09f7-66af-49ed-8b9e-484f46e6a68d", + "metadata": { + "id": "c07d09f7-66af-49ed-8b9e-484f46e6a68d" + }, + "source": [ + " \n", + "## 2.3) Developing a `PreferenceDataset` class and batch processing function" + ] + }, + { + "cell_type": "markdown", + "id": "86101174-00c8-485d-8273-d086d5311926", + "metadata": { + "id": "86101174-00c8-485d-8273-d086d5311926" + }, + "source": [ + "- In this section, we rewrite the `InstructionDataset` class from chapter 7 ([../01_main-chapter-code/ch07.ipynb](../01_main-chapter-code/ch07.ipynb)) for DPO\n", + "- This means that instead of focusing on single output sequences (responses), we modify the dataset class to return pairs of responses where one is preferred (\"chosen\") over the other (\"rejected\")\n", + "- Overall, the `PreferenceDataset` is almost identical to the `InstructionDataset` used in chapter 7:" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "db08ad74-6dd4-4e40-b1e5-bc5f037d3d27", + "metadata": { + "id": "db08ad74-6dd4-4e40-b1e5-bc5f037d3d27" + }, + "outputs": [], + "source": [ + "import torch\n", + "from torch.utils.data import Dataset\n", + "\n", + "\n", + "class PreferenceDataset(Dataset):\n", + " def __init__(self, data, tokenizer):\n", + " self.data = data\n", + "\n", + " # Pre-tokenize texts\n", + " self.encoded_texts = []\n", + " for entry in data:\n", + " prompt = format_input(entry)\n", + " rejected_response = entry[\"rejected\"]\n", + " chosen_response = entry[\"chosen\"]\n", + "\n", + " prompt_tokens = tokenizer.encode(prompt)\n", + " chosen_full_text = f\"{prompt}\\n\\n### Response:\\n{chosen_response}\"\n", + " rejected_full_text = f\"{prompt}\\n\\n### Response:\\n{rejected_response}\"\n", + " chosen_full_tokens = tokenizer.encode(chosen_full_text)\n", + " rejected_full_tokens = tokenizer.encode(rejected_full_text)\n", + "\n", + " self.encoded_texts.append({\n", + " \"prompt\": prompt_tokens,\n", + " \"chosen\": chosen_full_tokens,\n", + " \"rejected\": rejected_full_tokens,\n", + " })\n", + "\n", + " def __getitem__(self, index):\n", + " return self.encoded_texts[index]\n", + "\n", + " def __len__(self):\n", + " return len(self.data)\n" + ] + }, + { + "cell_type": "markdown", + "id": "2325d183-75b9-400a-80ac-0b8d2f526561", + "metadata": { + "id": "2325d183-75b9-400a-80ac-0b8d2f526561" + }, + "source": [ + "- Along with an updated `PreferenceDataset` class, we also need an updated batch collation function that we use to pad the sequences in each batch to an equal length so that we can assemble them in batches\n", + "- I added comments to the code below to illustrate the process; however, it might be easiest to understand how it works by looking at the example inputs and outputs further below:" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "8d3a43a6-7704-4bff-9bbc-a38632374f30", + "metadata": { + "id": "8d3a43a6-7704-4bff-9bbc-a38632374f30" + }, + "outputs": [], + 
"source": [ + "def custom_collate_fn(\n", + " batch,\n", + " pad_token_id=50256,\n", + " allowed_max_length=None,\n", + " mask_prompt_tokens=True,\n", + " device=\"cpu\"\n", + "):\n", + " # Initialize lists to hold batch data\n", + " batch_data = {\n", + " \"prompt\": [],\n", + " \"chosen\": [],\n", + " \"rejected\": [],\n", + " \"rejected_mask\": [],\n", + " \"chosen_mask\": []\n", + "\n", + " }\n", + "\n", + " # Determine the longest sequence to set a common padding length\n", + " max_length_common = 0\n", + " if batch:\n", + " for key in [\"chosen\", \"rejected\"]:\n", + " current_max = max(len(item[key])+1 for item in batch)\n", + " max_length_common = max(max_length_common, current_max)\n", + "\n", + " # Process each item in the batch\n", + " for item in batch:\n", + " prompt = torch.tensor(item[\"prompt\"])\n", + " batch_data[\"prompt\"].append(prompt)\n", + "\n", + " for key in [\"chosen\", \"rejected\"]:\n", + " # Adjust padding according to the common maximum length\n", + " sequence = item[key]\n", + " padded = sequence + [pad_token_id] * (max_length_common - len(sequence))\n", + " mask = torch.ones(len(padded)).bool()\n", + "\n", + " # Set mask for all padding tokens to False\n", + " mask[len(sequence):] = False\n", + "\n", + " # Set mask for all input tokens to False\n", + " # +2 sets the 2 newline (\"\\n\") tokens before \"### Response\" to False\n", + " if mask_prompt_tokens:\n", + " mask[:prompt.shape[0]+2] = False\n", + "\n", + " batch_data[key].append(torch.tensor(padded))\n", + " batch_data[f\"{key}_mask\"].append(mask)\n", + "\n", + " # Final processing\n", + " for key in [\"chosen\", \"rejected\", \"chosen_mask\", \"rejected_mask\"]:\n", + " # Stack all sequences into a tensor for the given key\n", + " tensor_stack = torch.stack(batch_data[key])\n", + "\n", + " # Optionally truncate to maximum sequence length\n", + " if allowed_max_length is not None:\n", + " tensor_stack = tensor_stack[:, :allowed_max_length]\n", + "\n", + " # Move to the specified device\n", + " batch_data[key] = tensor_stack.to(device)\n", + "\n", + " return batch_data" + ] + }, + { + "cell_type": "markdown", + "id": "76f3744b-9bb0-4f1e-b66b-cff35ad8fd9f", + "metadata": { + "id": "76f3744b-9bb0-4f1e-b66b-cff35ad8fd9f" + }, + "source": [ + "- Before we start using the custom collate function, let's make version of it with some of its function arguments prefilled:" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "d3cc137c-7ed7-4758-a518-cc4071b2817a", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "d3cc137c-7ed7-4758-a518-cc4071b2817a", + "outputId": "598e9def-9768-441a-f886-01f6ba6e250b" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Device: cuda\n" + ] + } + ], + "source": [ + "from functools import partial\n", + "\n", + "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", + "print(\"Device:\", device)\n", + "\n", + "customized_collate_fn = partial(\n", + " custom_collate_fn,\n", + " device=device, # Put the data directly on a GPU if available\n", + " mask_prompt_tokens=True, # This is optional\n", + " allowed_max_length=1024 # The supported context length of the model\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "5d29e996-e267-4348-bc1d-4ac6b725cf6a", + "metadata": { + "id": "5d29e996-e267-4348-bc1d-4ac6b725cf6a" + }, + "source": [ + "- Now, let's see the `customized_collate_fn` in action and apply it to some sample data from our preference dataset; for this, we take 
the first two entries:" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "1171057d-2a0f-48ff-bad6-4917a072f0f5", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "1171057d-2a0f-48ff-bad6-4917a072f0f5", + "outputId": "3db3eee8-db29-4ff6-8078-6577a05d953a" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "{'instruction': 'Evaluate the following phrase by transforming it into the '\n", + " 'spelling given.',\n", + " 'input': 'freind --> friend',\n", + " 'output': 'The spelling of the given phrase \"freind\" is incorrect, the '\n", + " 'correct spelling is \"friend\".',\n", + " 'rejected': 'The spelling of the given phrase \"freind\" is flat out wrong, get '\n", + " 'it together, the correct spelling is \"friend\".',\n", + " 'chosen': 'The spelling of the given phrase \"freind\" is incorrect, the '\n", + " 'correct spelling is \"friend\".'}\n", + "\n", + "{'instruction': 'Edit the following sentence for grammar.',\n", + " 'input': 'He go to the park every day.',\n", + " 'output': 'He goes to the park every day.',\n", + " 'rejected': 'He goes to the stupid park every single day.',\n", + " 'chosen': 'He goes to the park every day.'}\n" + ] + } + ], + "source": [ + "example_data = data[:2]\n", + "\n", + "for i in example_data:\n", + " print()\n", + " pprint.pp(i)" + ] + }, + { + "cell_type": "markdown", + "id": "8f1436cc-fbe5-4581-89d8-1992b5f04042", + "metadata": { + "id": "8f1436cc-fbe5-4581-89d8-1992b5f04042" + }, + "source": [ + "- Next, let's instantiate an `example_dataset` and use a PyTorch `DataLoader` to create an `example_dataloader` that mimics the data loader we will use for the model training later:" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "db327575-c34b-4fea-b3c7-e30569c9be78", + "metadata": { + "id": "db327575-c34b-4fea-b3c7-e30569c9be78" + }, + "outputs": [], + "source": [ + "import tiktoken\n", + "from torch.utils.data import DataLoader\n", + "\n", + "\n", + "tokenizer = tiktoken.get_encoding(\"gpt2\")\n", + "\n", + "example_dataset = PreferenceDataset(example_data, tokenizer)\n", + "\n", + "example_dataloader = DataLoader(\n", + " example_dataset,\n", + " batch_size=2,\n", + " collate_fn=customized_collate_fn,\n", + " shuffle=False\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "43a446b7-7037-4d9a-9f14-b4ee0f6f37af", + "metadata": { + "id": "43a446b7-7037-4d9a-9f14-b4ee0f6f37af" + }, + "source": [ + "- The dataset has the following keys:" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "87ed4cf9-d70a-4bc7-b676-67e76ed3ee10", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "87ed4cf9-d70a-4bc7-b676-67e76ed3ee10", + "outputId": "fa724d65-b0e1-4239-8090-9263135ad199" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "batch.keys: dict_keys(['prompt', 'chosen', 'rejected', 'rejected_mask', 'chosen_mask'])\n" + ] + } + ], + "source": [ + "for batch in example_dataloader:\n", + " break\n", + "\n", + "print(\"batch.keys:\", batch.keys())" + ] + }, + { + "cell_type": "markdown", + "id": "5bda3193-8c68-478c-98d8-0d9d880e7077", + "metadata": { + "id": "5bda3193-8c68-478c-98d8-0d9d880e7077" + }, + "source": [ + "- The prompts are a list of tensors, where each tensor contains the token IDs for a given example; since we selected a batch size of 2, we have two lists of token ID tensors here:" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": 
"468995ce-2906-498f-ac99-0a3f80d13d12", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "468995ce-2906-498f-ac99-0a3f80d13d12", + "outputId": "7f3df961-fcb5-4e49-9b0c-c99447c67cc1" + }, + "outputs": [ + { + "data": { + "text/plain": [ + "[tensor([21106, 318, 281, 12064, 326, 8477, 257, 4876, 13, 19430,\n", + " 257, 2882, 326, 20431, 32543, 262, 2581, 13, 198, 198,\n", + " 21017, 46486, 25, 198, 36, 2100, 4985, 262, 1708, 9546,\n", + " 416, 25449, 340, 656, 262, 24993, 1813, 13, 198, 198,\n", + " 21017, 23412, 25, 198, 19503, 521, 14610, 1545]),\n", + " tensor([21106, 318, 281, 12064, 326, 8477, 257, 4876, 13, 19430,\n", + " 257, 2882, 326, 20431, 32543, 262, 2581, 13, 198, 198,\n", + " 21017, 46486, 25, 198, 18378, 262, 1708, 6827, 329, 23491,\n", + " 13, 198, 198, 21017, 23412, 25, 198, 1544, 467, 284,\n", + " 262, 3952, 790, 1110, 13])]" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "batch[\"prompt\"]" + ] + }, + { + "cell_type": "markdown", + "id": "89cadebe-2516-4ae0-a71f-a8a623f2e1da", + "metadata": { + "id": "89cadebe-2516-4ae0-a71f-a8a623f2e1da" + }, + "source": [ + "- We don't really need the responses for training; what we need to feed to the model during training are the `\"chosen\"` and `\"rejected\"` entries\n", + "- The `\"chosen\"` and `\"rejected\"` response entries are padded so that we can stack them as tensors; similar to the prompts, these response texts are encoded into token IDs:" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "e8f49c56-3989-4fe9-81ac-6bb3cce1a5b8", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "e8f49c56-3989-4fe9-81ac-6bb3cce1a5b8", + "outputId": "ccc0bd06-6e85-4ee9-893b-d985f26a835d" + }, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[21106, 318, 281, 12064, 326, 8477, 257, 4876, 13, 19430,\n", + " 257, 2882, 326, 20431, 32543, 262, 2581, 13, 198, 198,\n", + " 21017, 46486, 25, 198, 36, 2100, 4985, 262, 1708, 9546,\n", + " 416, 25449, 340, 656, 262, 24993, 1813, 13, 198, 198,\n", + " 21017, 23412, 25, 198, 19503, 521, 14610, 1545, 198, 198,\n", + " 21017, 18261, 25, 198, 464, 24993, 286, 262, 1813, 9546,\n", + " 366, 19503, 521, 1, 318, 11491, 11, 262, 3376, 24993,\n", + " 318, 366, 6726, 1911, 50256, 50256, 50256, 50256, 50256, 50256,\n", + " 50256],\n", + " [21106, 318, 281, 12064, 326, 8477, 257, 4876, 13, 19430,\n", + " 257, 2882, 326, 20431, 32543, 262, 2581, 13, 198, 198,\n", + " 21017, 46486, 25, 198, 18378, 262, 1708, 6827, 329, 23491,\n", + " 13, 198, 198, 21017, 23412, 25, 198, 1544, 467, 284,\n", + " 262, 3952, 790, 1110, 13, 198, 198, 21017, 18261, 25,\n", + " 198, 1544, 2925, 284, 262, 3952, 790, 1110, 13, 50256,\n", + " 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\n", + " 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\n", + " 50256]], device='cuda:0')" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "batch[\"chosen\"]" + ] + }, + { + "cell_type": "markdown", + "id": "35a4cd6d-b2ad-45a6-b00a-ba5b720be4ea", + "metadata": { + "id": "35a4cd6d-b2ad-45a6-b00a-ba5b720be4ea" + }, + "source": [ + "- The token IDs above represent the model inputs, but in this format, they are hard to interpret for us humans\n", + "- So, let's implement a small utility function to convert them back into text so that we can inspect and interpret them more easily:" + ] + }, + { + 
"cell_type": "code", + "execution_count": 20, + "id": "52ea54ba-32cb-4ecb-b38b-923f42fd4615", + "metadata": { + "id": "52ea54ba-32cb-4ecb-b38b-923f42fd4615" + }, + "outputs": [], + "source": [ + "def decode_tokens_from_batch(token_ids, tokenizer):\n", + " ids_in_python_list = token_ids.flatten().tolist()\n", + " return tokenizer.decode(ids_in_python_list)" + ] + }, + { + "cell_type": "markdown", + "id": "bc9dd0ce-1fd4-419c-833f-ea5a1f8d800d", + "metadata": { + "id": "bc9dd0ce-1fd4-419c-833f-ea5a1f8d800d" + }, + "source": [ + "- Let's apply the `decode_tokens_from_batch` utility function to the first prompt entry in the batch:" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "55ee481e-3e2c-4ff6-b614-8cb18eb16a41", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "55ee481e-3e2c-4ff6-b614-8cb18eb16a41", + "outputId": "17ddec15-a09d-45b5-b1e8-600cd59a9600" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Evaluate the following phrase by transforming it into the spelling given.\n", + "\n", + "### Input:\n", + "freind --> friend\n" + ] + } + ], + "source": [ + "text = decode_tokens_from_batch(\n", + " token_ids=batch[\"prompt\"][0], # [0] for the first entry in the batch\n", + " tokenizer=tokenizer,\n", + ")\n", + "print(text)" + ] + }, + { + "cell_type": "markdown", + "id": "637b95c4-d5c2-4492-9d19-a45b090eee7e", + "metadata": { + "id": "637b95c4-d5c2-4492-9d19-a45b090eee7e" + }, + "source": [ + "- As we can see above, the prompt was correctly formatted; let's now do the same for the `\"chosen\"` response:" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "id": "33a24f20-5ec3-4a89-b57a-52e997163d07", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "33a24f20-5ec3-4a89-b57a-52e997163d07", + "outputId": "e04366ee-3719-4b07-fcef-6e9dddc06310" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Evaluate the following phrase by transforming it into the spelling given.\n", + "\n", + "### Input:\n", + "freind --> friend\n", + "\n", + "### Response:\n", + "The spelling of the given phrase \"freind\" is incorrect, the correct spelling is \"friend\".<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>\n" + ] + } + ], + "source": [ + "text = decode_tokens_from_batch(\n", + " token_ids=batch[\"chosen\"][0],\n", + " tokenizer=tokenizer,\n", + ")\n", + "print(text)" + ] + }, + { + "cell_type": "markdown", + "id": "ac9fbdbd-1cff-401f-8e6c-cd98c134c0f2", + "metadata": { + "id": "ac9fbdbd-1cff-401f-8e6c-cd98c134c0f2" + }, + "source": [ + "- As we can see above, similar to instruction finetuning, the response that is passed to the model during training also contains the input prompt\n", + "- Also note that we included `<|endoftext|>` tokens as padding tokens, which are necessary so that we can extend the responses to a similar length to stack them as a batch\n", + "- Don't worry; the `<|endoftext|>` tokens will be ignored in the loss later so that they won't affect the training outcome\n", + "- Let's now also inspect the corresponding rejected response:" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "db382be5-c727-4299-8597-c05424ba9308", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "db382be5-c727-4299-8597-c05424ba9308", + "outputId": "edbd8c4a-0528-4361-aeba-9b3c3bbde33b" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Evaluate the following phrase by transforming it into the spelling given.\n", + "\n", + "### Input:\n", + "freind --> friend\n", + "\n", + "### Response:\n", + "The spelling of the given phrase \"freind\" is flat out wrong, get it together, the correct spelling is \"friend\".<|endoftext|>\n" + ] + } + ], + "source": [ + "text = decode_tokens_from_batch(\n", + " token_ids=batch[\"rejected\"][0],\n", + " tokenizer=tokenizer,\n", + ")\n", + "print(text)" + ] + }, + { + "cell_type": "markdown", + "id": "715dc968-aa64-4388-b577-7c295831bdcf", + "metadata": { + "id": "715dc968-aa64-4388-b577-7c295831bdcf" + }, + "source": [ + "- In this case, as we can see above, the rejected response is a more impolite version of the chosen response (we don't want the model to generate impolite responses)\n", + "- Lastly, let's talk about the data masks: if you took a closer look at our custom collate function we implemented above, we created a `\"chosen_mask\"` and a `\"rejected_mask\"` for each dataset entry\n", + "- The masks have the same shape as the response entries, as shown below for the `\"chosen\"` entry:" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "5c324eab-cf1d-4071-b3ba-797d8ec4d1da", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "5c324eab-cf1d-4071-b3ba-797d8ec4d1da", + "outputId": "742a5742-1bc0-4f74-9eb9-cbf81f936ecb" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "chosen inputs: torch.Size([81])\n", + "chosen mask: torch.Size([81])\n" + ] + } + ], + "source": [ + "print(\"chosen inputs:\", batch[\"chosen\"][0].shape)\n", + "print(\"chosen mask: \", batch[\"chosen_mask\"][0].shape)" + ] + }, + { + "cell_type": 
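"markdown",
+ "id": "boolean-mask-toy-example",
+ "metadata": {},
+ "source": [
+ "- Since we will use these masks as boolean selection masks in a moment, here is a minimal toy example (with made-up values) of how boolean indexing selects tensor elements in PyTorch:\n",
+ "\n",
+ "```python\n",
+ "import torch\n",
+ "\n",
+ "x = torch.tensor([10, 20, 30, 40])\n",
+ "mask = torch.tensor([False, True, True, False])\n",
+ "print(x[mask])  # prints tensor([20, 30])\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": 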
"markdown", + "id": "880e95f7-cfc3-4f5f-be5e-c279fba5f674", + "metadata": { + "id": "880e95f7-cfc3-4f5f-be5e-c279fba5f674" + }, + "source": [ + "- The contents of these masks are boolean (`True` and `False`) values:" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "da75b550-5da4-4292-9a7e-a05b842bdcb7", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "da75b550-5da4-4292-9a7e-a05b842bdcb7", + "outputId": "e5f012c3-33ba-4e6b-aa55-3e331865218f" + }, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([False, False, False, False, False, False, False, False, False, False,\n", + " False, False, False, False, False, False, False, False, False, False,\n", + " False, False, False, False, False, False, False, False, False, False,\n", + " False, False, False, False, False, False, False, False, False, False,\n", + " False, False, False, False, False, False, False, False, False, False,\n", + " True, True, True, True, True, True, True, True, True, True,\n", + " True, True, True, True, True, True, True, True, True, True,\n", + " True, True, True, True, False, False, False, False, False, False,\n", + " False], device='cuda:0')" + ] + }, + "execution_count": 25, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "batch[\"chosen_mask\"][0]" + ] + }, + { + "cell_type": "markdown", + "id": "0e67b862-4430-4c99-9157-90955dde29b6", + "metadata": { + "id": "0e67b862-4430-4c99-9157-90955dde29b6" + }, + "source": [ + "- The `True` values denote token IDs that correspond to the actual response\n", + "- the `False` tokens correspond to token IDs that correspond to either prompt tokens (if we set `mask_prompt_tokens=True` in the `customized_collate_fn` function, which we previously did) or padding tokens\n", + "- Hence, we can use the mask as a selection mask to select only the token IDs that correspond to the response, that is, stripping all prompt and padding tokens, as we can see below:" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "id": "1114c6fe-524b-401c-b9fe-02260e6f0541", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "1114c6fe-524b-401c-b9fe-02260e6f0541", + "outputId": "6d99af1d-940a-4012-c5d9-21d463a66e40" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "### Response:\n", + "The spelling of the given phrase \"freind\" is incorrect, the correct spelling is \"friend\".\n" + ] + } + ], + "source": [ + "text = decode_tokens_from_batch(\n", + " token_ids=batch[\"chosen\"][0][batch[\"chosen_mask\"][0]],\n", + " tokenizer=tokenizer,\n", + ")\n", + "print(text)" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "a89f83a4-d16e-40d2-ba43-bd410affd967", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "a89f83a4-d16e-40d2-ba43-bd410affd967", + "outputId": "1d439c7e-c079-4594-d02a-fa83a3cb275d" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "### Response:\n", + "The spelling of the given phrase \"freind\" is flat out wrong, get it together, the correct spelling is \"friend\".\n" + ] + } + ], + "source": [ + "text = decode_tokens_from_batch(\n", + " token_ids=batch[\"rejected\"][0][batch[\"rejected_mask\"][0]],\n", + " tokenizer=tokenizer,\n", + ")\n", + "print(text)" + ] + }, + { + "cell_type": "markdown", + "id": "e525287f-137c-4d71-94ae-cfd6db7b057c", + "metadata": { + "id": "e525287f-137c-4d71-94ae-cfd6db7b057c" + }, + "source": [ + "- We will make use 
of this mask to ignore prompt and padding tokens when computing the DPO loss later"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "jbafhM_R8z5q",
+ "metadata": {
+ "id": "jbafhM_R8z5q"
+ },
+ "source": [
+ " \n",
+ "## 2.4) Creating training, validation, and test set data loaders"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3c29eb8-d1b9-4abe-a155-52b3270d759a",
+ "metadata": {
+ "id": "b3c29eb8-d1b9-4abe-a155-52b3270d759a"
+ },
+ "source": [
+ "- Above, we worked with a small example subset from the preference dataset for illustration purposes\n",
+ "- Let's now create the actual training, validation, and test set data loaders\n",
+ "- This process is identical to creating the data loaders in the pretraining and instruction finetuning chapters and thus should be self-explanatory"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "id": "5c0068bf-bda0-4d9e-9f79-2fc4b94cbd1c",
+ "metadata": {
+ "id": "5c0068bf-bda0-4d9e-9f79-2fc4b94cbd1c"
+ },
+ "outputs": [],
+ "source": [
+ "from torch.utils.data import DataLoader\n",
+ "\n",
+ "\n",
+ "num_workers = 0\n",
+ "batch_size = 8\n",
+ "\n",
+ "torch.manual_seed(123)\n",
+ "\n",
+ "train_dataset = PreferenceDataset(train_data, tokenizer)\n",
+ "train_loader = DataLoader(\n",
+ " train_dataset,\n",
+ " batch_size=batch_size,\n",
+ " collate_fn=customized_collate_fn,\n",
+ " shuffle=True,\n",
+ " drop_last=True,\n",
+ " num_workers=num_workers\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "id": "2f4a257b-6835-4194-abe2-5831d6a44885",
+ "metadata": {
+ "id": "2f4a257b-6835-4194-abe2-5831d6a44885"
+ },
+ "outputs": [],
+ "source": [
+ "val_dataset = PreferenceDataset(val_data, tokenizer)\n",
+ "val_loader = DataLoader(\n",
+ " val_dataset,\n",
+ " batch_size=batch_size,\n",
+ " collate_fn=customized_collate_fn,\n",
+ " shuffle=False,\n",
+ " drop_last=False,\n",
+ " num_workers=num_workers\n",
+ ")\n",
+ "\n",
+ "test_dataset = PreferenceDataset(test_data, tokenizer)\n",
+ "test_loader = DataLoader(\n",
+ " test_dataset,\n",
+ " batch_size=batch_size,\n",
+ " collate_fn=customized_collate_fn,\n",
+ " shuffle=False,\n",
+ " drop_last=False,\n",
+ " num_workers=num_workers\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1fe1ba19-a6d5-4a77-8283-7a17d7ec06e2",
+ "metadata": {
+ "id": "1fe1ba19-a6d5-4a77-8283-7a17d7ec06e2"
+ },
+ "source": [
+ "- Let's iterate through the data loader and take a look at the dataset shapes:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "id": "80d61f15-facb-4eb8-a9be-6427887d24b2",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "80d61f15-facb-4eb8-a9be-6427887d24b2",
+ "outputId": "dacd3bdf-f069-4b36-da2c-d6c1c6cc5405"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Train loader:\n",
+ "torch.Size([8, 77]) torch.Size([8, 77])\n",
+ "torch.Size([8, 81]) torch.Size([8, 81])\n",
+ "torch.Size([8, 94]) torch.Size([8, 94])\n",
+ "torch.Size([8, 75]) torch.Size([8, 75])\n",
+ "torch.Size([8, 75]) torch.Size([8, 75])\n",
+ "torch.Size([8, 76]) torch.Size([8, 76])\n",
+ "torch.Size([8, 99]) torch.Size([8, 99])\n",
+ "torch.Size([8, 71]) torch.Size([8, 71])\n",
+ "torch.Size([8, 67]) torch.Size([8, 67])\n",
+ "torch.Size([8, 88]) torch.Size([8, 88])\n",
+ "torch.Size([8, 65]) torch.Size([8, 65])\n",
+ "torch.Size([8, 79]) torch.Size([8, 79])\n",
+ "torch.Size([8, 80]) torch.Size([8, 80])\n",
+ "torch.Size([8, 97]) torch.Size([8, 97])\n",
+ "torch.Size([8, 71]) torch.Size([8, 71])\n",
+ 
"torch.Size([8, 89]) torch.Size([8, 89])\n", + "torch.Size([8, 75]) torch.Size([8, 75])\n", + "torch.Size([8, 69]) torch.Size([8, 69])\n", + "torch.Size([8, 84]) torch.Size([8, 84])\n", + "torch.Size([8, 79]) torch.Size([8, 79])\n", + "torch.Size([8, 101]) torch.Size([8, 101])\n", + "torch.Size([8, 87]) torch.Size([8, 87])\n", + "torch.Size([8, 73]) torch.Size([8, 73])\n", + "torch.Size([8, 69]) torch.Size([8, 69])\n", + "torch.Size([8, 80]) torch.Size([8, 80])\n", + "torch.Size([8, 68]) torch.Size([8, 68])\n", + "torch.Size([8, 73]) torch.Size([8, 73])\n", + "torch.Size([8, 71]) torch.Size([8, 71])\n", + "torch.Size([8, 91]) torch.Size([8, 91])\n", + "torch.Size([8, 78]) torch.Size([8, 78])\n", + "torch.Size([8, 78]) torch.Size([8, 78])\n", + "torch.Size([8, 71]) torch.Size([8, 71])\n", + "torch.Size([8, 84]) torch.Size([8, 84])\n", + "torch.Size([8, 92]) torch.Size([8, 92])\n", + "torch.Size([8, 71]) torch.Size([8, 71])\n", + "torch.Size([8, 66]) torch.Size([8, 66])\n", + "torch.Size([8, 73]) torch.Size([8, 73])\n", + "torch.Size([8, 73]) torch.Size([8, 73])\n", + "torch.Size([8, 78]) torch.Size([8, 78])\n", + "torch.Size([8, 66]) torch.Size([8, 66])\n", + "torch.Size([8, 76]) torch.Size([8, 76])\n", + "torch.Size([8, 100]) torch.Size([8, 100])\n", + "torch.Size([8, 77]) torch.Size([8, 77])\n", + "torch.Size([8, 92]) torch.Size([8, 92])\n", + "torch.Size([8, 93]) torch.Size([8, 93])\n", + "torch.Size([8, 115]) torch.Size([8, 115])\n", + "torch.Size([8, 81]) torch.Size([8, 81])\n", + "torch.Size([8, 95]) torch.Size([8, 95])\n", + "torch.Size([8, 81]) torch.Size([8, 81])\n", + "torch.Size([8, 94]) torch.Size([8, 94])\n", + "torch.Size([8, 70]) torch.Size([8, 70])\n", + "torch.Size([8, 89]) torch.Size([8, 89])\n", + "torch.Size([8, 90]) torch.Size([8, 90])\n", + "torch.Size([8, 70]) torch.Size([8, 70])\n", + "torch.Size([8, 85]) torch.Size([8, 85])\n", + "torch.Size([8, 65]) torch.Size([8, 65])\n", + "torch.Size([8, 76]) torch.Size([8, 76])\n", + "torch.Size([8, 72]) torch.Size([8, 72])\n", + "torch.Size([8, 84]) torch.Size([8, 84])\n", + "torch.Size([8, 84]) torch.Size([8, 84])\n", + "torch.Size([8, 65]) torch.Size([8, 65])\n", + "torch.Size([8, 63]) torch.Size([8, 63])\n", + "torch.Size([8, 74]) torch.Size([8, 74])\n", + "torch.Size([8, 79]) torch.Size([8, 79])\n", + "torch.Size([8, 93]) torch.Size([8, 93])\n", + "torch.Size([8, 71]) torch.Size([8, 71])\n", + "torch.Size([8, 99]) torch.Size([8, 99])\n", + "torch.Size([8, 81]) torch.Size([8, 81])\n", + "torch.Size([8, 77]) torch.Size([8, 77])\n", + "torch.Size([8, 74]) torch.Size([8, 74])\n", + "torch.Size([8, 75]) torch.Size([8, 75])\n", + "torch.Size([8, 73]) torch.Size([8, 73])\n", + "torch.Size([8, 87]) torch.Size([8, 87])\n", + "torch.Size([8, 80]) torch.Size([8, 80])\n", + "torch.Size([8, 75]) torch.Size([8, 75])\n", + "torch.Size([8, 81]) torch.Size([8, 81])\n", + "torch.Size([8, 86]) torch.Size([8, 86])\n", + "torch.Size([8, 71]) torch.Size([8, 71])\n", + "torch.Size([8, 63]) torch.Size([8, 63])\n", + "torch.Size([8, 82]) torch.Size([8, 82])\n", + "torch.Size([8, 68]) torch.Size([8, 68])\n", + "torch.Size([8, 76]) torch.Size([8, 76])\n", + "torch.Size([8, 68]) torch.Size([8, 68])\n", + "torch.Size([8, 97]) torch.Size([8, 97])\n", + "torch.Size([8, 72]) torch.Size([8, 72])\n", + "torch.Size([8, 85]) torch.Size([8, 85])\n", + "torch.Size([8, 67]) torch.Size([8, 67])\n", + "torch.Size([8, 85]) torch.Size([8, 85])\n", + "torch.Size([8, 87]) torch.Size([8, 87])\n", + "torch.Size([8, 76]) torch.Size([8, 76])\n", + "torch.Size([8, 74]) 
torch.Size([8, 74])\n", + "torch.Size([8, 92]) torch.Size([8, 92])\n", + "torch.Size([8, 85]) torch.Size([8, 85])\n", + "torch.Size([8, 72]) torch.Size([8, 72])\n", + "torch.Size([8, 93]) torch.Size([8, 93])\n", + "torch.Size([8, 82]) torch.Size([8, 82])\n", + "torch.Size([8, 76]) torch.Size([8, 76])\n", + "torch.Size([8, 93]) torch.Size([8, 93])\n", + "torch.Size([8, 80]) torch.Size([8, 80])\n", + "torch.Size([8, 87]) torch.Size([8, 87])\n", + "torch.Size([8, 69]) torch.Size([8, 69])\n", + "torch.Size([8, 90]) torch.Size([8, 90])\n", + "torch.Size([8, 99]) torch.Size([8, 99])\n", + "torch.Size([8, 104]) torch.Size([8, 104])\n", + "torch.Size([8, 101]) torch.Size([8, 101])\n", + "torch.Size([8, 98]) torch.Size([8, 98])\n", + "torch.Size([8, 79]) torch.Size([8, 79])\n", + "torch.Size([8, 71]) torch.Size([8, 71])\n", + "torch.Size([8, 76]) torch.Size([8, 76])\n", + "torch.Size([8, 79]) torch.Size([8, 79])\n", + "torch.Size([8, 79]) torch.Size([8, 79])\n", + "torch.Size([8, 67]) torch.Size([8, 67])\n", + "torch.Size([8, 84]) torch.Size([8, 84])\n", + "torch.Size([8, 78]) torch.Size([8, 78])\n", + "torch.Size([8, 85]) torch.Size([8, 85])\n", + "torch.Size([8, 70]) torch.Size([8, 70])\n" + ] + } + ], + "source": [ + "print(\"Train loader:\")\n", + "for batch in train_loader:\n", + " print(\n", + " batch[\"chosen\"].shape,\n", + " batch[\"rejected\"].shape,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "id": "7ff958a6-5e61-49f5-9a97-360aa34e3758", + "metadata": { + "id": "7ff958a6-5e61-49f5-9a97-360aa34e3758" + }, + "source": [ + "- Each row shows the shape of the `\"chosen\"` and `\"rejected\"` entries in each batch\n", + "- Since we applied padding on a batch-by-batch basis, each row has a different shape\n", + "- This is for efficiency reasons because it would be inefficient to pad all samples to the longest sample in the whole dataset" + ] + }, + { + "cell_type": "markdown", + "id": "29cb0543-1142-4374-8825-3384e20c6ac0", + "metadata": { + "id": "29cb0543-1142-4374-8825-3384e20c6ac0" + }, + "source": [ + " \n", + "# 3) Loading a finetuned LLM for DPO alignment" + ] + }, + { + "cell_type": "markdown", + "id": "22b08881-b769-4b26-8153-5ec0e8573ed2", + "metadata": { + "id": "22b08881-b769-4b26-8153-5ec0e8573ed2" + }, + "source": [ + "- LLM alignment steps, such as RLHF or DPO, assume that we already have an instruction-finetuned model\n", + "- This section contains minimal code to load the model that was instruction finetuned and saved in chapter 7 (via [../01_main-chapter-code/ch07.ipynb](../01_main-chapter-code/ch07.ipynb))\n", + "- Make sure you run the chapter 7 code first to create the instruction-finetuned model before you proceed\n", + "- The code below will copy the instruction-finetuned model into the current directory:" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "b3c6d82b-63f7-459a-b901-7125ab225e56", + "metadata": { + "id": "b3c6d82b-63f7-459a-b901-7125ab225e56" + }, + "outputs": [], + "source": [ + "import os\n", + "from pathlib import Path\n", + "import shutil\n", + "\n", + "\n", + "finetuned_model_path = Path(\"gpt2-medium355M-sft.pth\")\n", + "if not finetuned_model_path.exists():\n", + "\n", + " # Try finding the model checkpoint locally:\n", + " relative_path = Path(\"..\") / \"01_main-chapter-code\" / finetuned_model_path\n", + " if relative_path.exists():\n", + " shutil.copy(relative_path, \".\")\n", + "\n", + " # If this notebook is run on Google Colab, get it from a Google Drive folder\n", + " elif \"COLAB_GPU\" in os.environ or 
\"COLAB_TPU_ADDR\" in os.environ:\n", + " from google.colab import drive\n", + " drive.mount(\"/content/drive\")\n", + " google_drive_path = \"/content/drive/My Drive/Books/LLMs-From-Scratch/ch07/colab/gpt2-medium355M-sft.pth\" # Readers need to adjust this path\n", + " shutil.copy(google_drive_path, \".\")\n", + "\n", + " else:\n", + " print(\n", + " f\"Could not find '{finetuned_model_path}'.\\n\"\n", + " \"Run the `ch07.ipynb` notebook to finetune and save the finetuned model.\"\n", + " )" + ] + }, + { + "cell_type": "markdown", + "id": "71c8585e-4569-4033-84a7-3903d0e8aaf8", + "metadata": { + "id": "71c8585e-4569-4033-84a7-3903d0e8aaf8" + }, + "source": [ + "- Next, we reuse the basic configuration from previous chapters to load the model weights:" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "a8333fee-e7fe-4f8c-9411-8c1db6252d98", + "metadata": { + "id": "a8333fee-e7fe-4f8c-9411-8c1db6252d98" + }, + "outputs": [], + "source": [ + "from previous_chapters import GPTModel\n", + "\n", + "\n", + "BASE_CONFIG = {\n", + " \"vocab_size\": 50257, # Vocabulary size\n", + " \"context_length\": 1024, # Context length\n", + " \"drop_rate\": 0.0, # Dropout rate\n", + " \"qkv_bias\": True # Query-key-value bias\n", + "}\n", + "\n", + "model_configs = {\n", + " \"gpt2-small (124M)\": {\"emb_dim\": 768, \"n_layers\": 12, \"n_heads\": 12},\n", + " \"gpt2-medium (355M)\": {\"emb_dim\": 1024, \"n_layers\": 24, \"n_heads\": 16},\n", + " \"gpt2-large (774M)\": {\"emb_dim\": 1280, \"n_layers\": 36, \"n_heads\": 20},\n", + " \"gpt2-xl (1558M)\": {\"emb_dim\": 1600, \"n_layers\": 48, \"n_heads\": 25},\n", + "}\n", + "\n", + "CHOOSE_MODEL = \"gpt2-medium (355M)\"\n", + "\n", + "BASE_CONFIG.update(model_configs[CHOOSE_MODEL])\n", + "\n", + "model = GPTModel(BASE_CONFIG)" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "id": "c2821403-605c-4071-a4ff-e23f4c9a11fd", + "metadata": { + "id": "c2821403-605c-4071-a4ff-e23f4c9a11fd" + }, + "outputs": [], + "source": [ + "model.load_state_dict(\n", + " torch.load(\n", + " \"gpt2-medium355M-sft.pth\",\n", + " map_location=torch.device(\"cpu\"),\n", + " weights_only=True\n", + " )\n", + ")\n", + "model.eval();" + ] + }, + { + "cell_type": "markdown", + "id": "61863bec-bd42-4194-b994-645bfe2df8be", + "metadata": { + "id": "61863bec-bd42-4194-b994-645bfe2df8be" + }, + "source": [ + "- Before training the loaded model with DPO, let's make sure that the finetuned model was saved and loaded correctly by trying it out on some sample data:" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "id": "4357aec5-0db2-4d73-b37b-539cd8fa80a3", + "metadata": { + "id": "4357aec5-0db2-4d73-b37b-539cd8fa80a3" + }, + "outputs": [], + "source": [ + "prompt = \"\"\"Below is an instruction that describes a task. Write a response\n", + "that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Convert the active sentence to passive: 'The chef cooks the meal every day.'\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "id": "541e7988-38d3-47f6-bd52-9da6564479fa", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "541e7988-38d3-47f6-bd52-9da6564479fa", + "outputId": "278f7ddf-37c2-4c3a-d069-c510ef6f8d7a" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. 
Write a response\n", + "that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Convert the active sentence to passive: 'The chef cooks the meal every day.'\n", + "\n", + "### Response:\n", + "The meal is cooked every day by the chef.\n" + ] + } + ], + "source": [ + "from previous_chapters import (\n", + " generate,\n", + " text_to_token_ids,\n", + " token_ids_to_text\n", + ")\n", + "\n", + "torch.manual_seed(123)\n", + "\n", + "token_ids = generate(\n", + " model=model,\n", + " idx=text_to_token_ids(prompt, tokenizer),\n", + " max_new_tokens=35,\n", + " context_size=BASE_CONFIG[\"context_length\"],\n", + " eos_id=50256\n", + ")\n", + "\n", + "response = token_ids_to_text(token_ids, tokenizer)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "be87ed19-fded-4e56-8585-6c7c0367b354", + "metadata": { + "id": "be87ed19-fded-4e56-8585-6c7c0367b354" + }, + "source": [ + "- As we can see above, the model gives a reasonable and correct response\n", + "- As explained in chapter 7, in practice, we would clean up the response to only return the response text with the prompt and prompt style removed (similar to what you are familiar with from ChatGPT, for example):" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "id": "0c30c4e2-af84-4ab4-95d0-9641e32c1e7f", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "0c30c4e2-af84-4ab4-95d0-9641e32c1e7f", + "outputId": "70192bbe-fdf6-43eb-c673-f573f8c70156" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "The meal is cooked every day by the chef.\n" + ] + } + ], + "source": [ + "def extract_response(response_text, input_text):\n", + " return response_text[len(input_text):].replace(\"### Response:\", \"\").strip()\n", + "\n", + "response = extract_response(response, prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "80442cb9-83b1-46b8-bad0-7d44297ca52d", + "metadata": { + "id": "80442cb9-83b1-46b8-bad0-7d44297ca52d" + }, + "source": [ + "- Now, we are almost ready to get to the DPO part\n", + "- As mentioned at the beginning of this notebook, DPO works with two LLMs: a policy model (the LLM that we want to optimize) and a reference model (the original model that we keep unchanged)\n", + "- Below, we rename the `model` as `policy_model` and instantiate a second instance of the model we refer to as the `reference_model`" + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "id": "5d88cc3a-312e-4b29-bc6d-de8354c1eb9f", + "metadata": { + "id": "5d88cc3a-312e-4b29-bc6d-de8354c1eb9f" + }, + "outputs": [], + "source": [ + "policy_model = model\n", + "\n", + "reference_model = GPTModel(BASE_CONFIG)\n", + "reference_model.load_state_dict(\n", + " torch.load(\n", + " \"gpt2-medium355M-sft.pth\",\n", + " map_location=torch.device(\"cpu\"),\n", + " weights_only=True\n", + " )\n", + ")\n", + "reference_model.eval()\n", + "\n", + "policy_model.to(device)\n", + "reference_model.to(device);" + ] + }, + { + "cell_type": "markdown", + "id": "9c6c1469-0038-4914-8aa5-15b1f81877cc", + "metadata": { + "id": "9c6c1469-0038-4914-8aa5-15b1f81877cc" + }, + "source": [ + " \n", + "# 4) Coding the DPO Loss Function" + ] + }, + { + "cell_type": "markdown", + "id": "75dbe60c-e4ce-413e-beec-22eff0237d11", + "metadata": { + "id": "75dbe60c-e4ce-413e-beec-22eff0237d11" + }, + "source": [ + "- After we took care of the model loading and dataset preparation in the previous sections, we can now get to the fun part and code the 
DPO loss\n", + "- Note that the DPO loss code below is based on the method proposed in the [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://arxiv.org/abs/2305.18290) paper\n", + "- For reference, the core DPO equation is shown again below:\n", + "\n", + "\n", + "\n", + "- In the equation above,\n", + " - \"expected value\" $\\mathbb{E}$ is statistics jargon and stands for the average or mean value of the random variable (the expression inside the brackets)\n", + " - The $\\pi_{\\theta}$ variable is the so-called policy (a term borrowed from reinforcement learning) and represents the LLM we want to optimize; $\\pi_{ref}$ is a reference LLM, which is typically the original LLM before optimization (at the beginning of the training, $\\pi_{\\theta}$ and $\\pi_{ref}$ are typically the same)\n", + " - $\\beta$ is a hyperparameter to control the divergence between the $\\pi_{\\theta}$ and the reference model; increasing $\\beta$ increases the impact of the difference between\n", + "$\\pi_{\\theta}$ and $\\pi_{ref}$ in terms of their log probabilities on the overall loss function, thereby increasing the divergence between the two models\n", + "- In code, we can implement the DPO loss as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "id": "38CsrrwJIZiV", + "metadata": { + "id": "38CsrrwJIZiV" + }, + "outputs": [], + "source": [ + "import torch.nn.functional as F\n", + "\n", + "def compute_dpo_loss(\n", + " model_chosen_logprobs,\n", + " model_rejected_logprobs,\n", + " reference_chosen_logprobs,\n", + " reference_rejected_logprobs,\n", + " beta=0.1,\n", + " ):\n", + " \"\"\"Compute the DPO loss for a batch of policy and reference model log probabilities.\n", + "\n", + " Args:\n", + " policy_chosen_logprobs: Log probabilities of the policy model for the chosen responses. Shape: (batch_size,)\n", + " policy_rejected_logprobs: Log probabilities of the policy model for the rejected responses. Shape: (batch_size,)\n", + " reference_chosen_logprobs: Log probabilities of the reference model for the chosen responses. Shape: (batch_size,)\n", + " reference_rejected_logprobs: Log probabilities of the reference model for the rejected responses. Shape: (batch_size,)\n", + " beta: Temperature parameter for the DPO loss; typically something in the range of 0.1 to 0.5. We ignore the reference model as beta -> 0.\n", + " label_smoothing: conservativeness for DPO loss.\n", + "\n", + " Returns:\n", + " A tuple of three tensors: (loss, chosen_rewards, rejected_rewards).\n", + " \"\"\"\n", + "\n", + " model_logratios = model_chosen_logprobs - model_rejected_logprobs\n", + " reference_logratios = reference_chosen_logprobs - reference_rejected_logprobs\n", + " logits = model_logratios - reference_logratios\n", + "\n", + " # DPO (Eq. 
7 of https://arxiv.org/pdf/2305.18290.pdf)\n", + "    losses = -F.logsigmoid(beta * logits)\n", + "\n", + "    # Optional values to track progress during training\n", + "    chosen_rewards = (model_chosen_logprobs - reference_chosen_logprobs).detach()\n", + "    rejected_rewards = (model_rejected_logprobs - reference_rejected_logprobs).detach()\n", + "\n", + "    # .mean() to average over the samples in the batch\n", + "    return losses.mean(), chosen_rewards.mean(), rejected_rewards.mean()" + ] + }, + { + "cell_type": "markdown", + "id": "693be65b-38fc-4d18-bf53-a260a15436e1", + "metadata": { + "id": "693be65b-38fc-4d18-bf53-a260a15436e1" + }, + "source": [ + "- If you are familiar with logarithms, note that we have the general relationship $\\log\\left(\\frac{a}{b}\\right) = \\log a - \\log b$, which we applied in the code above\n", + "- Keeping this in mind, let's go through some of the steps (we will calculate the `logprobs` using a separate function later)\n", + "- Let's start with the lines\n", + "\n", + "  ```python\n", + "  model_logratios = model_chosen_logprobs - model_rejected_logprobs\n", + "  reference_logratios = reference_chosen_logprobs - reference_rejected_logprobs\n", + "  ```\n", + "\n", + "- These lines calculate the log ratios (the differences in log probabilities) between the chosen and rejected samples for both the policy model and the reference model (this follows from $\\log\\left(\\frac{a}{b}\\right) = \\log a - \\log b$):\n", + "\n", + "$$\\log \\left( \\frac{\\pi_\\theta (y_w \\mid x)}{\\pi_\\theta (y_l \\mid x)} \\right) \\quad \\text{and} \\quad \\log \\left( \\frac{\\pi_{\\text{ref}}(y_w \\mid x)}{\\pi_{\\text{ref}}(y_l \\mid x)} \\right)$$" + ] + }, + { + "cell_type": "markdown", + "id": "5458d217-e0ad-40a5-925c-507a8fcf5795", + "metadata": { + "id": "5458d217-e0ad-40a5-925c-507a8fcf5795" + }, + "source": [ + "- Next, the code `logits = model_logratios - reference_logratios` computes the difference between the model's log ratios and the reference model's log ratios (the $\\beta$ scaling is applied in the next step), i.e.,\n", + "\n", + "$$\\log \\left( \\frac{\\pi_\\theta (y_w \\mid x)}{\\pi_{\\text{ref}} (y_w \\mid x)} \\right)\n", + "- \\log \\left( \\frac{\\pi_\\theta (y_l \\mid x)}{\\pi_{\\text{ref}} (y_l \\mid x)} \\right)$$\n" + ] + }, + { + "cell_type": "markdown", + "id": "f18e3e36-f5f1-407f-b662-4c20a0ac0354", + "metadata": { + "id": "f18e3e36-f5f1-407f-b662-4c20a0ac0354" + }, + "source": [ + "- Finally, `losses = -F.logsigmoid(beta * logits)` calculates the loss using the log-sigmoid function; in the original equation, the term inside the expectation is\n", + "\n", + "$$\\log \\sigma \\left( \\beta \\log \\left( \\frac{\\pi_\\theta (y_w \\mid x)}{\\pi_{\\text{ref}} (y_w \\mid x)} \\right)\n", + "- \\beta \\log \\left( \\frac{\\pi_\\theta (y_l \\mid x)}{\\pi_{\\text{ref}} (y_l \\mid x)} \\right) \\right)$$" + ] + }, + 
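{ + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- As a quick, optional sanity check (a small illustration added here; it is not required for training): if the policy and the reference model assign identical log probabilities, then `logits` is zero, the loss equals $-\\log \\sigma(0) = \\log 2 \\approx 0.6931$, and both rewards are zero; this is exactly the initial loss we will observe before training below:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Sanity check (illustrative values): identical policy and reference\n", + "# log probabilities yield logits = 0, a loss of -logsigmoid(0) = ln(2) ~= 0.6931,\n", + "# and zero chosen/rejected rewards\n", + "chosen_logprobs = torch.tensor([-1.0, -1.5, -0.5])\n", + "rejected_logprobs = torch.tensor([-2.0, -2.5, -1.5])\n", + "\n", + "loss, chosen_rewards, rejected_rewards = compute_dpo_loss(\n", + "    model_chosen_logprobs=chosen_logprobs,\n", + "    model_rejected_logprobs=rejected_logprobs,\n", + "    reference_chosen_logprobs=chosen_logprobs,\n", + "    reference_rejected_logprobs=rejected_logprobs,\n", + "    beta=0.1\n", + ")\n", + "print(loss, chosen_rewards, rejected_rewards)" + ] + }, + { + "cell_type": "markdown", + "id": "00a6f92d-7d64-41fe-bcaa-2bddd46027e1", + "metadata": { + "id": "00a6f92d-7d64-41fe-bcaa-2bddd46027e1" + }, + "source": [ + "- Above, we assumed that the log probabilities were already computed; let's now define a `compute_logprobs` function that we can use to compute these log probabilities that were passed into the `compute_dpo_loss` function above, that is, the values $\\pi_\\theta (y_w \\mid x)$, ${\\pi_\\theta (y_l \\mid x)}$, and so forth:" + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "id": "71e6507b-d2e2-4469-86b9-f057b08b5df9", + "metadata": { + "id": "71e6507b-d2e2-4469-86b9-f057b08b5df9" + }, + "outputs": [], + "source": [ + "def 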
compute_logprobs(logits, labels, selection_mask=None):\n", + "    \"\"\"\n", + "    Compute log probabilities.\n", + "\n", + "    Args:\n", + "      logits: Tensor of shape (batch_size, num_tokens, vocab_size)\n", + "      labels: Tensor of shape (batch_size, num_tokens)\n", + "      selection_mask: Tensor of shape (batch_size, num_tokens)\n", + "\n", + "    Returns:\n", + "      mean_log_prob: Mean log probability excluding padding tokens.\n", + "    \"\"\"\n", + "\n", + "    # Labels are the inputs shifted by one\n", + "    labels = labels[:, 1:].clone()\n", + "\n", + "    # Truncate logits to match the labels num_tokens\n", + "    logits = logits[:, :-1, :]\n", + "\n", + "    log_probs = F.log_softmax(logits, dim=-1)\n", + "\n", + "    # Gather the log probabilities for the actual labels\n", + "    selected_log_probs = torch.gather(\n", + "        input=log_probs,\n", + "        dim=-1,\n", + "        index=labels.unsqueeze(-1)\n", + "    ).squeeze(-1)\n", + "\n", + "    if selection_mask is not None:\n", + "        mask = selection_mask[:, 1:].clone()\n", + "\n", + "        # Apply the mask to filter out padding tokens\n", + "        selected_log_probs = selected_log_probs * mask\n", + "\n", + "        # Calculate the average log probability excluding padding tokens\n", + "        # This averages over the tokens, so the resulting shape is (batch_size,)\n", + "        avg_log_prob = selected_log_probs.sum(-1) / mask.sum(-1)\n", + "\n", + "        return avg_log_prob\n", + "\n", + "    else:\n", + "        return selected_log_probs.mean(-1)" + ] + }, + { + "cell_type": "markdown", + "id": "cf6a71ac-3fcc-44a4-befc-1c56bbd378d7", + "metadata": { + "id": "cf6a71ac-3fcc-44a4-befc-1c56bbd378d7" + }, + "source": [ + "- Note that the function above might look a bit intimidating at first due to the `torch.gather` function, but it's pretty similar to what happens under the hood in PyTorch's `cross_entropy` function\n", + "- Consider the following example:" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "id": "59873470-464d-4be2-860f-cbb7ac2d80ba", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "59873470-464d-4be2-860f-cbb7ac2d80ba", + "outputId": "8f7b47d4-73fe-4605-c17d-ad6cfd909a9b" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor(1.4185) tensor(1.4185)\n" + ] + } + ], + "source": [ + "# Sample data\n", + "logits = torch.tensor(\n", + "    [[2.0, 1.0, 0.1],\n", + "     [0.5, 2.5, 0.3]]) # Shape: (2, 3)\n", + "targets = torch.tensor([0, 2]) # Shape: (2,)\n", + "\n", + "\n", + "# Manual loss using torch.gather\n", + "log_softmax_logits = F.log_softmax(logits, dim=1) # Shape: (2, 3)\n", + "selected_log_probs = torch.gather(\n", + "    input=log_softmax_logits,\n", + "    dim=1,\n", + "    index=targets.unsqueeze(1), # Shape: (2, 1)\n", + ").squeeze(1) # Shape: (2,)\n", + "manual_loss = -selected_log_probs.mean() # Averaging over the batch\n", + "\n", + "\n", + "# PyTorch loss\n", + "cross_entropy_loss = F.cross_entropy(logits, targets)\n", + "\n", + "print(manual_loss, cross_entropy_loss)" + ] + }, + { + "cell_type": "markdown", + "id": "f86d7add-f7ff-4a87-9193-7878c42bf0e7", + "metadata": { + "id": "f86d7add-f7ff-4a87-9193-7878c42bf0e7" + }, + "source": [ + "- So, above, we can see that the two implementations are equivalent; now, let's narrow down a bit further to the `torch.gather` mechanics\n", + "- Consider the following two tensors:" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "id": "508db6ba-cc40-479f-a996-2250cf862388", + "metadata": { + "id": "508db6ba-cc40-479f-a996-2250cf862388" + }, + "outputs": [], + "source": [ + 
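"# Two toy tensors illustrating torch.gather: 't' holds the values to\n", + "# select from, and 'm' holds the index positions (along the last\n", + "# dimension) at which to pick them\n", + 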
"t = torch.tensor(\n", + " [[1., 2.,],\n", + " [3., 4.]]\n", + ")\n", + "\n", + "m = torch.tensor(\n", + " [[1, 1],\n", + " [0, 1]]\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "821cbf45-8fbb-47b7-bae8-6c3271e36979", + "metadata": { + "id": "821cbf45-8fbb-47b7-bae8-6c3271e36979" + }, + "source": [ + "- Above, `t` is a tensor we want to select from, and `m` is a mask to specify how we want to select\n", + " - For instance, since `m` contains `[1, 1]` n the first row, it will select two times the value of `t` in index position `1`, which is the value 2.\n", + " - The second row of `m`, `[0, 1]`, selects index positions 0 and 1 in the second row or `t`, which are `3.` and `4.`" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "id": "4fdN5q1YPAbM", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "4fdN5q1YPAbM", + "outputId": "e935e8ad-1519-4c4b-dbff-65adae0a15a4" + }, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[2., 2.],\n", + " [3., 4.]])" + ] + }, + "execution_count": 42, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.gather(input=t, dim=-1, index=m)" + ] + }, + { + "cell_type": "markdown", + "id": "d10eeaf4-f24b-4e79-916a-abedf74fe4a3", + "metadata": { + "id": "d10eeaf4-f24b-4e79-916a-abedf74fe4a3" + }, + "source": [ + "- In other words, `torch.gather` is a selection function\n", + "- When we computed the loss earlier, we used it to retrieve the log probabilities corresponding to the correct token in the 50,256-token vocabulary\n", + "- The \"correct\" tokens are the tokens given in the response entry" + ] + }, + { + "cell_type": "markdown", + "id": "d5d10a43-ee5b-47ed-9d55-ddd96e66cf0b", + "metadata": { + "id": "d5d10a43-ee5b-47ed-9d55-ddd96e66cf0b" + }, + "source": [ + "- Regarding the `compute_logprobs` function above, we use `torch.gather` here because it gives us a bit more control than `cross_entropy`, but is, in essence, a similar idea\n", + "- The `selection_mask` we use there is to optionally ignore prompt and padding tokens\n", + "- We can then use the `compute_logprobs` function as follows to compute the inputs for the `compute_dpo_loss` loss function" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "id": "dfa7a4db-eba0-47d8-ad6d-7b5e7676e318", + "metadata": { + "id": "dfa7a4db-eba0-47d8-ad6d-7b5e7676e318" + }, + "outputs": [], + "source": [ + "def compute_dpo_loss_batch(batch, policy_model, reference_model, beta):\n", + " \"\"\"Compute the DPO loss on an input batch\"\"\"\n", + "\n", + " # where policy_model(batch[\"chosen\"]) are the logits\n", + " policy_chosen_log_probas = compute_logprobs(\n", + " logits=policy_model(batch[\"chosen\"]),\n", + " labels=batch[\"chosen\"],\n", + " selection_mask=batch[\"chosen_mask\"]\n", + " )\n", + " policy_rejected_log_probas = compute_logprobs(\n", + " logits=policy_model(batch[\"rejected\"]),\n", + " labels=batch[\"rejected\"],\n", + " selection_mask=batch[\"rejected_mask\"]\n", + " )\n", + " ref_chosen_log_probas = compute_logprobs(\n", + " logits=reference_model(batch[\"chosen\"]),\n", + " labels=batch[\"chosen\"],\n", + " selection_mask=batch[\"chosen_mask\"]\n", + " )\n", + " ref_rejected_log_probas = compute_logprobs(\n", + " logits=reference_model(batch[\"rejected\"]),\n", + " labels=batch[\"rejected\"],\n", + " selection_mask=batch[\"rejected_mask\"]\n", + " )\n", + " loss, chosen_rewards, rejected_rewards = compute_dpo_loss(\n", + " model_chosen_logprobs=policy_chosen_log_probas,\n", + " 
model_rejected_logprobs=policy_rejected_log_probas,\n", + "        reference_chosen_logprobs=ref_chosen_log_probas,\n", + "        reference_rejected_logprobs=ref_rejected_log_probas,\n", + "        beta=beta\n", + "    )\n", + "    return loss, chosen_rewards, rejected_rewards" + ] + }, + { + "cell_type": "markdown", + "id": "b28caafb-f378-4332-a142-3e0f9ef67fbb", + "metadata": { + "id": "b28caafb-f378-4332-a142-3e0f9ef67fbb" + }, + "source": [ + "- The above function works for a single batch, for example:" + ] + }, + { + "cell_type": "code", + "execution_count": 44, + "id": "dd74fcc4-4280-41e9-9a22-838e85c84ee4", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "dd74fcc4-4280-41e9-9a22-838e85c84ee4", + "outputId": "65a70828-7dd2-4f72-ffec-45aeaf8afad0" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(tensor(0.6931, device='cuda:0'), tensor(0., device='cuda:0'), tensor(0., device='cuda:0'))\n" + ] + } + ], + "source": [ + "with torch.no_grad():\n", + "    loss = compute_dpo_loss_batch(batch, policy_model, reference_model, beta=0.1)\n", + "print(loss)" + ] + }, + { + "cell_type": "markdown", + "id": "b17429cd-2a00-41c8-9f16-38b1c9a5179f", + "metadata": { + "id": "b17429cd-2a00-41c8-9f16-38b1c9a5179f" + }, + "source": [ + "- Below, we extend this function to work for a specified `num_batches` in a data loader:" + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "id": "682e9ad5-c5de-4d1b-9e93-3918bf5d5302", + "metadata": { + "id": "682e9ad5-c5de-4d1b-9e93-3918bf5d5302" + }, + "outputs": [], + "source": [ + "def compute_dpo_loss_loader(data_loader, policy_model, reference_model, beta, num_batches=None):\n", + "    \"\"\"Apply compute_dpo_loss_batch to a whole data loader\"\"\"\n", + "\n", + "    total_loss, total_chosen_rewards, total_rejected_rewards = 0., 0., 0.\n", + "    if len(data_loader) == 0:\n", + "        # Return NaNs for all three values to keep the return signature consistent\n", + "        return float(\"nan\"), float(\"nan\"), float(\"nan\")\n", + "\n", + "    elif num_batches is None:\n", + "        num_batches = len(data_loader)\n", + "    else:\n", + "        # Reduce the number of batches to match the total number of batches in the data loader\n", + "        # if num_batches exceeds the number of batches in the data loader\n", + "        num_batches = min(num_batches, len(data_loader))\n", + "    for i, batch in enumerate(data_loader):\n", + "        if i < num_batches:\n", + "            loss, chosen_rewards, rejected_rewards = compute_dpo_loss_batch(\n", + "                batch=batch,\n", + "                policy_model=policy_model,\n", + "                reference_model=reference_model,\n", + "                beta=beta\n", + "            )\n", + "            total_loss += loss.item()\n", + "            total_chosen_rewards += chosen_rewards.item()\n", + "            total_rejected_rewards += rejected_rewards.item()\n", + "\n", + "        else:\n", + "            break\n", + "\n", + "    # Calculate the averages over the evaluated batches\n", + "    total_loss /= num_batches\n", + "    total_chosen_rewards /= num_batches\n", + "    total_rejected_rewards /= num_batches\n", + "    return total_loss, total_chosen_rewards, total_rejected_rewards" + ] + }, + 
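{ + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- For illustration (an optional sketch; the `evaluate_dpo_loss_loader` convenience function defined below is what we will actually use), the loader-level function can be called like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Optional usage sketch: evaluate the DPO loss on the first 5 validation batches\n", + "with torch.no_grad():\n", + "    val_loss, val_chosen, val_rejected = compute_dpo_loss_loader(\n", + "        data_loader=val_loader,\n", + "        policy_model=policy_model,\n", + "        reference_model=reference_model,\n", + "        beta=0.1,\n", + "        num_batches=5\n", + "    )\n", + "print(val_loss, val_chosen, val_rejected)" + ] + }, + { + "cell_type": "markdown", + "id": "852e4c09-d285-44d5-be12-d29769950cb6", + "metadata": { + "id": "852e4c09-d285-44d5-be12-d29769950cb6" + }, + "source": [ + "- Why a specified `num_batches`? 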
That's purely for efficiency reasons (because calculating the loss on the whole dataset each time would slow down the training significantly)" + ] + }, + { + "cell_type": "markdown", + "id": "2cca95b7-18fe-4076-9138-f70f21607b8c", + "metadata": { + "id": "2cca95b7-18fe-4076-9138-f70f21607b8c" + }, + "source": [ + "- Lastly, we define a convenience function for our training function later; this `evaluate_dpo_loss_loader` function computes the DPO loss and rewards for both the training and validation loader for logging purposes:" + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "id": "c3d214ec-49ba-4bf0-ac80-f90fa0d832e9", + "metadata": { + "id": "c3d214ec-49ba-4bf0-ac80-f90fa0d832e9" + }, + "outputs": [], + "source": [ + "def evaluate_dpo_loss_loader(policy_model, reference_model, train_loader, val_loader, beta, eval_iter):\n", + "    \"\"\"Compute the DPO loss for the training and validation dataset\"\"\"\n", + "\n", + "    policy_model.eval()\n", + "    with torch.no_grad():\n", + "        train_loss, train_chosen_rewards, train_rejected_rewards = compute_dpo_loss_loader(\n", + "            data_loader=train_loader,\n", + "            policy_model=policy_model,\n", + "            reference_model=reference_model,\n", + "            beta=beta,\n", + "            num_batches=eval_iter\n", + "        )\n", + "\n", + "        val_loss, val_chosen_rewards, val_rejected_rewards = compute_dpo_loss_loader(\n", + "            data_loader=val_loader,\n", + "            policy_model=policy_model,\n", + "            reference_model=reference_model,\n", + "            beta=beta,\n", + "            num_batches=eval_iter\n", + "        )\n", + "\n", + "    res = {\n", + "        \"train_loss\": train_loss,\n", + "        \"train_chosen_reward\": train_chosen_rewards,\n", + "        \"train_rejected_reward\": train_rejected_rewards,\n", + "        \"val_loss\": val_loss,\n", + "        \"val_chosen_reward\": val_chosen_rewards,\n", + "        \"val_rejected_reward\": val_rejected_rewards\n", + "    }\n", + "\n", + "    policy_model.train()\n", + "    return res" + ] + }, + { + "cell_type": "markdown", + "id": "6e95ed92-6743-4f13-8b91-0fbf2e540de1", + "metadata": { + "id": "6e95ed92-6743-4f13-8b91-0fbf2e540de1" + }, + "source": [ + "- In this section, we covered a lot of ground; here is a brief recap:\n", + "  - The flow is: compute the `logits` via the models $\\rightarrow$ compute the log probabilities via `compute_logprobs` $\\rightarrow$ compute the DPO loss via `compute_dpo_loss`\n", + "  - we have the `compute_dpo_loss_batch` function that facilitates the process above\n", + "  - the `compute_dpo_loss_loader` utility function applies the `compute_dpo_loss_batch` function to a data loader\n", + "  - the `evaluate_dpo_loss_loader` function applies `compute_dpo_loss_loader` to both the training and validation set data loaders for logging purposes" + ] + }, + { + "cell_type": "markdown", + "id": "cb8a8f18-536e-4d83-a0d0-ac518a85f157", + "metadata": { + "id": "cb8a8f18-536e-4d83-a0d0-ac518a85f157" + }, + "source": [ + " \n", + "# 5) Training the model" + ] + }, + { + "cell_type": "markdown", + "id": "4b11d63d-3ddc-4070-9b2b-5ca0edb08d0c", + "metadata": { + "id": "4b11d63d-3ddc-4070-9b2b-5ca0edb08d0c" + }, + "source": [ + "- After setting up the DPO loss functions in the previous section, we can now finally train the model\n", + "- Note that this training function is the same one we used for pretraining and instruction finetuning, with minor differences:\n", + "  - we swap the cross-entropy loss with our new DPO loss function\n", + "  - we also track the rewards and reward margins, which are commonly used in RLHF and DPO contexts to track the training progress\n" + ] + }, + { + "cell_type": "code", + 
"execution_count": 47, + "id": "f90d9325-77b2-417f-88ff-0a5174889413", + "metadata": { + "id": "f90d9325-77b2-417f-88ff-0a5174889413" + }, + "outputs": [], + "source": [ + "from previous_chapters import generate_and_print_sample\n", + "\n", + "\n", + "def train_model_dpo_simple(\n", + " policy_model, reference_model, train_loader, val_loader,\n", + " optimizer, num_epochs, beta,\n", + " eval_freq, eval_iter, start_context, tokenizer\n", + "):\n", + "\n", + " # Initialize lists to track losses and tokens seen\n", + " tracking = {\n", + " \"train_losses\": [],\n", + " \"train_chosen_rewards\": [],\n", + " \"train_rejected_rewards\": [],\n", + " \"val_losses\": [],\n", + " \"val_chosen_rewards\": [],\n", + " \"val_rejected_rewards\": [],\n", + " \"tokens_seen\": []\n", + " }\n", + " tokens_seen, global_step = 0, -1\n", + "\n", + " # Main training loop\n", + " for epoch in range(num_epochs):\n", + " policy_model.train() # Set model to training mode\n", + "\n", + " for batch_idx, batch in enumerate(train_loader):\n", + "\n", + " optimizer.zero_grad() # Reset loss gradients from previous batch iteration\n", + "\n", + " loss, chosen_rewards, rejected_rewards = compute_dpo_loss_batch(\n", + " batch=batch,\n", + " policy_model=policy_model,\n", + " reference_model=reference_model,\n", + " beta=beta\n", + " )\n", + "\n", + " loss.backward() # Calculate loss gradients\n", + " optimizer.step() # Update model weights using loss gradients\n", + "\n", + " tokens_seen += batch[\"chosen\"].numel()\n", + " global_step += 1\n", + "\n", + " # Optional evaluation step\n", + " if global_step % eval_freq == 0:\n", + " res = evaluate_dpo_loss_loader(\n", + " policy_model=policy_model,\n", + " reference_model=reference_model,\n", + " train_loader=train_loader,\n", + " val_loader=val_loader,\n", + " beta=beta,\n", + " eval_iter=eval_iter\n", + " )\n", + " tracking[\"train_losses\"].append(res[\"train_loss\"])\n", + " tracking[\"train_chosen_rewards\"].append(res[\"train_chosen_reward\"])\n", + " tracking[\"train_rejected_rewards\"].append(res[\"train_rejected_reward\"])\n", + " tracking[\"val_losses\"].append(res[\"val_loss\"])\n", + " tracking[\"val_chosen_rewards\"].append(res[\"val_chosen_reward\"])\n", + " tracking[\"val_rejected_rewards\"].append(res[\"val_rejected_reward\"])\n", + " tracking[\"tokens_seen\"].append(tokens_seen)\n", + " train_reward_margin = res[\"train_chosen_reward\"] - res[\"train_rejected_reward\"]\n", + " val_reward_margin = res[\"val_chosen_reward\"] - res[\"val_rejected_reward\"]\n", + "\n", + " print(\n", + " f\"Ep {epoch+1} (Step {global_step:06d}): \"\n", + " f\"Train loss {res['train_loss']:.3f}, Val loss {res['val_loss']:.3f}, \"\n", + " f\"Train reward margins {train_reward_margin:.3f}, \"\n", + " f\"Val reward margins {val_reward_margin:.3f}\"\n", + " )\n", + "\n", + " # Print a sample text after each epoch\n", + " generate_and_print_sample(\n", + " model=model,\n", + " tokenizer=tokenizer,\n", + " device=loss.device,\n", + " start_context=start_context\n", + " )\n", + "\n", + " return tracking" + ] + }, + { + "cell_type": "markdown", + "id": "820d4904-f819-4d62-bfb4-85cf28863683", + "metadata": { + "id": "820d4904-f819-4d62-bfb4-85cf28863683" + }, + "source": [ + "- Before we start the training, let's print the initial losses and rewards:" + ] + }, + { + "cell_type": "code", + "execution_count": 48, + "id": "d53210c5-6d9c-46b0-af22-ee875c2806c5", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "d53210c5-6d9c-46b0-af22-ee875c2806c5", + 
"outputId": "8b1d2b39-16c5-4b99-e920-5b33d3c0f34d" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Training loss: 0.6931471824645996\n", + "Validation loss: 0.6931471824645996\n", + "Train reward margin: 0.0\n", + "Val reward margin: 0.0\n" + ] + } + ], + "source": [ + "torch.manual_seed(123) # For reproducibility due to the shuffling in the data loader\n", + "\n", + "res = evaluate_dpo_loss_loader(\n", + " policy_model=policy_model,\n", + " reference_model=reference_model,\n", + " train_loader=train_loader,\n", + " val_loader=val_loader,\n", + " beta=0.1,\n", + " eval_iter=5\n", + ")\n", + "\n", + "print(\"Training loss:\", res[\"train_loss\"])\n", + "print(\"Validation loss:\", res[\"val_loss\"])\n", + "\n", + "print(\"Train reward margin:\", res[\"train_chosen_reward\"] - res[\"train_rejected_reward\"])\n", + "print(\"Val reward margin:\", res[\"val_chosen_reward\"] - res[\"val_rejected_reward\"])" + ] + }, + { + "cell_type": "markdown", + "id": "4a006e91-df94-43ca-8025-1ba791e37bc4", + "metadata": { + "id": "4a006e91-df94-43ca-8025-1ba791e37bc4" + }, + "source": [ + "- Also, let's take a look at some of the initial model responses (the first 3 examples in the validation set):" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "id": "q4Ro9DrBa7zH", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "q4Ro9DrBa7zH", + "outputId": "b974d4bd-b92a-4a2a-bb7a-5a2a0d1eca11" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Convert the active sentence to passive: 'The chef cooks the meal every day.'\n", + "\n", + "Correct response:\n", + ">> The meal is cooked by the chef every day.\n", + "\n", + "Model response:\n", + ">> The meal is cooked every day by the chef.\n", + "\n", + "-------------------------------------\n", + "\n", + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Classify an input string as either a noun or a verb.\n", + "\n", + "### Input:\n", + "Dance\n", + "\n", + "Correct response:\n", + ">> 'Dance' can be classified as a verb.\n", + "\n", + "Model response:\n", + ">> \"Dance\" can be classified as a verb.\n", + "\n", + "-------------------------------------\n", + "\n", + "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Rewrite the sentence using a metaphor.\n", + "\n", + "### Input:\n", + "The book is very interesting.\n", + "\n", + "Correct response:\n", + ">> The book is a page-turner.\n", + "\n", + "Model response:\n", + ">> The book is a treat.\n", + "\n", + "-------------------------------------\n", + "\n" + ] + } + ], + "source": [ + "torch.manual_seed(123)\n", + "\n", + "\n", + "for entry in val_data[:3]:\n", + "\n", + " input_text = format_input(entry)\n", + "\n", + " token_ids = generate(\n", + " model=model,\n", + " idx=text_to_token_ids(input_text, tokenizer).to(device),\n", + " max_new_tokens=256,\n", + " context_size=BASE_CONFIG[\"context_length\"],\n", + " eos_id=50256\n", + " )\n", + " generated_text = token_ids_to_text(token_ids, tokenizer)\n", + " response_text = (\n", + " generated_text[len(input_text):]\n", + " .replace(\"### Response:\", \"\")\n", + " .strip()\n", + ")\n", + "\n", + " print(input_text)\n", + " print(f\"\\nCorrect response:\\n>> {entry['output']}\")\n", + " print(f\"\\nModel response:\\n>> {response_text.strip()}\")\n", + " print(\"\\n-------------------------------------\\n\")" + ] + }, + { + "cell_type": "markdown", + "id": "ac2386ae-5c4c-448e-bfbf-4ec0604b171e", + "metadata": { + "id": "ac2386ae-5c4c-448e-bfbf-4ec0604b171e" + }, + "source": [ + "- Above, we see the original model responses\n", + "- Note that the goal of DPO is to induce slight style changes; this means we want the model to generate similar but slightly more polite responses\n", + "- Before we execute the following code cell that starts the training, here are a few notes about some of the settings:\n", + " - we are only passing the parameters of the policy model into the `AdamW` optimizer; that's the model we want to optimize (we don't want to modify the reference model)\n", + " - we only train for 1 epoch; that's because DPO is very prone to collapse (the loss might improve, but the model will start generating nonsensical texts)\n", + " - in DPO, it's best to use a very small learning rate\n", + " - the beta value can be increased from 0.1 to 0.5 to reduce the effect of DPO (we use 0.1 here to make the results more noticeable)\n", + " - The training takes about 2 minutes on an A100 GPU, but it can also be trained in 4 minutes on a smaller L4 GPU; training on a M3 MacBook Air takes about 30 minutes" + ] + }, + { + "cell_type": "code", + "execution_count": 50, + "id": "54b739be-871e-4c97-bf14-ffd2c58e1311", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "54b739be-871e-4c97-bf14-ffd2c58e1311", + "outputId": "d98b08b0-c325-411e-a1a4-05e7403f0345" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Ep 1 (Step 000000): Train loss 0.692, Val loss 0.693, Train reward margins 0.019, Val reward margins 0.009\n", + "Ep 1 (Step 000005): Train loss 0.690, Val loss 0.691, Train reward margins 0.070, Val reward margins 0.052\n", + "Ep 1 (Step 000010): Train loss 0.687, Val loss 0.688, Train reward margins 0.126, Val reward margins 0.108\n", + "Ep 1 (Step 000015): Train loss 0.676, Val loss 0.685, Train reward margins 0.362, Val reward margins 0.173\n", + "Ep 1 (Step 000020): Train loss 0.676, Val loss 0.680, Train reward margins 0.351, Val reward margins 0.264\n", + "Ep 1 (Step 000025): Train loss 0.666, Val loss 0.676, Train reward margins 0.564, Val reward margins 0.359\n", + "Ep 1 (Step 000030): Train loss 0.672, Val loss 0.672, Train reward 
margins 0.456, Val reward margins 0.441\n", + "Ep 1 (Step 000035): Train loss 0.663, Val loss 0.669, Train reward margins 0.658, Val reward margins 0.511\n", + "Ep 1 (Step 000040): Train loss 0.666, Val loss 0.666, Train reward margins 0.597, Val reward margins 0.574\n", + "Ep 1 (Step 000045): Train loss 0.648, Val loss 0.662, Train reward margins 0.982, Val reward margins 0.660\n", + "Ep 1 (Step 000050): Train loss 0.648, Val loss 0.659, Train reward margins 0.993, Val reward margins 0.734\n", + "Ep 1 (Step 000055): Train loss 0.647, Val loss 0.656, Train reward margins 1.014, Val reward margins 0.799\n", + "Ep 1 (Step 000060): Train loss 0.652, Val loss 0.653, Train reward margins 0.893, Val reward margins 0.870\n", + "Ep 1 (Step 000065): Train loss 0.631, Val loss 0.650, Train reward margins 1.361, Val reward margins 0.948\n", + "Ep 1 (Step 000070): Train loss 0.618, Val loss 0.646, Train reward margins 1.699, Val reward margins 1.038\n", + "Ep 1 (Step 000075): Train loss 0.617, Val loss 0.642, Train reward margins 1.733, Val reward margins 1.121\n", + "Ep 1 (Step 000080): Train loss 0.592, Val loss 0.639, Train reward margins 2.333, Val reward margins 1.194\n", + "Ep 1 (Step 000085): Train loss 0.610, Val loss 0.636, Train reward margins 1.907, Val reward margins 1.275\n", + "Ep 1 (Step 000090): Train loss 0.650, Val loss 0.633, Train reward margins 0.964, Val reward margins 1.353\n", + "Ep 1 (Step 000095): Train loss 0.607, Val loss 0.630, Train reward margins 1.962, Val reward margins 1.423\n", + "Ep 1 (Step 000100): Train loss 0.600, Val loss 0.627, Train reward margins 2.127, Val reward margins 1.500\n", + "Ep 1 (Step 000105): Train loss 0.590, Val loss 0.624, Train reward margins 2.458, Val reward margins 1.564\n", + "Ep 1 (Step 000110): Train loss 0.607, Val loss 0.622, Train reward margins 1.976, Val reward margins 1.621\n", + "Ep 1 (Step 000115): Train loss 0.621, Val loss 0.620, Train reward margins 1.605, Val reward margins 1.682\n", + "Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Rewrite the sentence using a metaphor. ### Input: The book is very interesting. ### Response: The book is a treat.<|endoftext|>The following is an instruction that describes a task. Write a response that appropriately completes the request. ### Input: The assignment was written by the student. 
### Response\n", + "Training completed in 1.69 minutes.\n" + ] + } + ], + "source": [ + "import time\n", + "\n", + "start_time = time.time()\n", + "\n", + "torch.manual_seed(123)\n", + "\n", + "\n", + "optimizer = torch.optim.AdamW(policy_model.parameters(), lr=5e-6, weight_decay=0.01)\n", + "\n", + "num_epochs = 1\n", + "tracking = train_model_dpo_simple(\n", + " policy_model=policy_model,\n", + " reference_model=reference_model,\n", + " train_loader=train_loader,\n", + " val_loader=val_loader,\n", + " optimizer=optimizer,\n", + " num_epochs=num_epochs,\n", + " beta=0.1, # value between 0.1 and 0.5\n", + " eval_freq=5,\n", + " eval_iter=5,\n", + " start_context=format_input(val_data[2]),\n", + " tokenizer=tokenizer\n", + ")\n", + "\n", + "end_time = time.time()\n", + "execution_time_minutes = (end_time - start_time) / 60\n", + "print(f\"Training completed in {execution_time_minutes:.2f} minutes.\")" + ] + }, + { + "cell_type": "markdown", + "id": "eba8ea88-8771-4eb9-855d-2fe1ca2dc2fa", + "metadata": { + "id": "eba8ea88-8771-4eb9-855d-2fe1ca2dc2fa" + }, + "source": [ + "- As we can see based on the tracked results above, the loss improves\n", + "- Also, the reward margins, which is the difference between the rewards of the chosen and the rejected responses, improve, which is a good sign\n", + "- Let's take a more concrete look at these results in the next section" + ] + }, + { + "cell_type": "markdown", + "id": "11e23989-92bd-4ac2-a4bc-65d4c7ac334e", + "metadata": { + "id": "11e23989-92bd-4ac2-a4bc-65d4c7ac334e" + }, + "source": [ + " \n", + "# 6) Analyzing the results" + ] + }, + { + "cell_type": "markdown", + "id": "66d7d5fe-c617-45cb-8ea9-ddc7baa22654", + "metadata": { + "id": "66d7d5fe-c617-45cb-8ea9-ddc7baa22654" + }, + "source": [ + "- Let's begin analyzing the results by plotting the DPO loss:" + ] + }, + { + "cell_type": "code", + "execution_count": 51, + "id": "8ddcc66f-cd7c-4f46-96ea-af919ea1a199", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 307 + }, + "id": "8ddcc66f-cd7c-4f46-96ea-af919ea1a199", + "outputId": "c7164b26-8d32-41d1-8c6a-ab835d58d4c5" + }, + "outputs": [ + { + "data": { + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAeoAAAEiCAYAAAA21pHjAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABs/klEQVR4nO3deVhU5dvA8e/MsO8gsimLKCpuiCCIqGXillmapZmVWtkvxS1b1NdS27TSzErTtFIrTcvSzH3Jpdz3FXFBBRdARXZZ57x/jAySqIDADHh/rutcwpnnnHM/jHDPec6zqBRFURBCCCGEUVIbOgAhhBBC3J0kaiGEEMKISaIWQgghjJgkaiGEEMKISaIWQgghjJgkaiGEEMKISaIWQgghjJgkaiGEEMKISaIWQgghjJgkaiGqsfPnz6NSqTh06JChQxFClJEkaiGMnEqluuc2ceJEQ4cohKhAJoYOQAhxb1euXNF/vWTJEsaPH090dLR+n42NjSHCEkJUErmjFsLIubm56Td7e3tUKpX+excXF6ZNm0bt2rUxNzenefPmrF279q7nys/P5+WXX6Zhw4bExsYC8Oeff9KiRQssLCzw9fXl/fffJy8vT3+MSqXiu+++o2fPnlhZWeHn58eKFSv0r9+4cYN+/fpRs2ZNLC0t8fPzY968eXeNYenSpTRt2hRLS0tq1KhBREQEGRkZ+te/++47/P39sbCwoGHDhnzzzTdFjo+Li6N37944ODjg5OTEU089xfnz5/WvDxgwgB49ejB16lTc3d2pUaMGkZGR5ObmlvhnLoRRUYQQVca8efMUe3t7/ffTpk1T7OzslF9++UU5efKk8s477yimpqbKqVOnFEVRlHPnzimAcvDgQSUrK0vp2bOnEhgYqCQmJiqKoijbtm1T7OzslPnz5ytnz55V1q9fr/j4+CgTJ07UXwNQateurSxatEg5ffq0Mnz4cMXGxka5fv26oiiKEhkZqTRv3lzZu3evcu7cOWXDhg3KihUrio3/8uXLiomJiTJt2jTl3LlzypEjR5SZM2cqaWlpiqIoys8//6y4u7srv//+uxITE6P8/vvvipOTkzJ//nxFURQlJydH8ff3V15++WXlyJEjyokTJ5Tnn39eadCggZKdna0oiqL0799fsbOzU15//XUlKipK+euvvxQrKytlzpw55ftmCFFJJFELUYX8N1F7eHgoH3/8cZEyLVu2VIYMGaIoSmGi/ueff5QOHToobdq0UZKTk/VlO3TooEyaNKnI8T/99JPi7u6u/x5Q3n33Xf336enpCqCsWbNGURRF6d69uzJw4MASxb9//34FUM6fP1/s63Xr1lUWLVpUZN+HH36ohIWF6WNr0KCBotVq9a9nZ2crlpaWyrp16xRF0SVqb29vJS8vT1/m2WefVfr06VOiGIUwNvKMWogqKjU1lcuXLxMeHl5kf3h4OIcPHy6yr2/fvtSuXZu///4bS0tL/f7Dhw+zfft2Pv74Y/2+/Px8srKyyMzMxMrKCoBmzZrpX7e2tsbOzo7ExEQABg8eTK9evThw4ACdOnWiR48etG7dutiYAwIC6NChA02bNqVz58506tSJZ555BkdHRzIyMjh79iyvvPIKgwYN0h+Tl5eHvb29Pt4zZ85ga2tb5LxZWVmcPXtW/33jxo3RaDT6793d3Tl69Og9fppCGC9J1EI8BB5//HF+/vlndu7cyWOPPabfn56ezvvvv8/TTz99xzEWFhb6r01NTYu8plKp0Gq1AHTt2pULFy6wevVqNmzYQIcOHYiMjGTq1Kl3nFOj0bBhwwZ27NjB+vXr+frrrxk3bhy7d+/WfyiYO3cuoaGhdxxXEG9QUBALFy6849w1a9YsUbxCVDWSqIWoouzs7PDw8GD79u088sgj+v3bt28nJCSkSNnBgwfTpEkTnnzySVatWqUv36JFC6Kjo6lXr94DxVKzZk369+9P//79adu2LW+//XaxiRp0STM8PJzw8HDGjx+Pt7c3y5YtY9SoUXh4eBATE0O/fv2KPbZFixYsWbIEFxcX7OzsHihmIaoKSdRCVGFvv/02EyZMoG7dujRv3px58+Zx6NChYu84hw0bRn5+Pk888QRr1qyhTZs2jB8/nieeeAIvLy+eeeYZ1Go1hw8f5tixY3z00UclimH8+PEEBQXRuHFjsrOzWblyJf7+/sWW3b17N5s2baJTp064uLiwe/durl69qi///vvvM3z4cOzt7enSpQvZ2dns27ePGzduMGrUKPr168eUKVN46qmn+OCDD6hduzYXLlzgjz/+4J133qF27dpl/2EKYaQkUQtRhQ0fPpyUlBTefPNNEhMTadSoEStWrMDPz6/Y8iNHjkSr1fL444+zdu1aOnfuzMqVK/nggw/49NNPMTU1pWHDhrz66qsljsHMzIyxY8dy/vx5LC0tadu2LYsXLy62rJ2dHdu2bWP69Omkpqbi7e3N559/TteuXQF49dVXsbKyYsqUKbz99ttYW1vTtGlTRo4cCYCVlRXbtm1j9OjRPP3006SlpVGrVi06dOggd9ii2lIpiqIYOgghhBBCFE8mPBFCCCGMmCRqIYQQwohJohZCCCGMmCRqIYQQwohJohZCCCGMmCRqIYQQwohJoi5nM2fOxMfHBwsLC0JDQ9mzZ0+lXn/btm10794dDw8PVCoVy5cvL/K6oiiMHz8ed3d3LC0tiYiI4PTp00XKJCUl0a9fP+zs7HBwcOCVV14hPT29SJkjR47Qtm1bLCws8PT05LPPPrsjlt9++42GDRtiYWFB06ZNWb16dYnrMXnyZFq2bImtrS0uLi706NGjyBrMoJvfOTIykho1amBjY0OvXr1ISEgoUiY2NpZu3bphZWWFi4sLb7/9dpElHAG2bNlCixYtMDc3p169esyfP/+OeMr6vs6aNYtmzZphZ2eHnZ0dYWFhrFmzpkrVoTiffPIJKpVKP765KtVl4sSJqFSqIlvDhg2rXD0ALl26xAsvvECNGjWwtLSkadOm7Nu3T/96Vfl99/HxueM9UalUREZGAlXrPakQhl0TpHpZvHixYmZmpvzwww/K8ePHlUGDBikODg5KQkJCpcWwevVqZdy4ccoff/yhAMqyZcuKvP7JJ58o9vb2yvLly5XDhw8rTz75pFKnTh3l5s2b+jJdunRRAgIClF27din//POPUq9ePaVv377611NSUhRXV1elX79+yrFjx5RffvlFsbS0VL799lt9me3btysajUb57LPPlBMnTijvvvuuYmpqqhw9erRE9ejcubMyb9485dixY8qhQ4eUxx9/XPHy8lLS09P1ZV5//XXF09NT2bRpk7Jv3z6lVatWSuvWrfWv5+XlKU2aNFEiIiKUgwcPKqtXr1acnZ2VsWPH6svExMQoVlZWyqhRo5QTJ04oX3/9taLRaJS1a9fqyzzI+7pixQpl1apVyqlTp5To6Gjl//7v/xRTU1Pl2LFjVaYO/7Vnzx7Fx8dHadasmTJixAj9/qpSlwkTJiiNGzdWrly5ot+uXr1a5eqRlJSkeHt7KwMGDFB2796txM
TEKOvWrVPOnDmjL1NVft8TExOLvB8bNmxQAGXz5s1V6j2pKJKoy1FISIgSGRmp/z4/P1/x8PBQJk+ebJB4/puotVqt4ubmpkyZMkW/Lzk5WTE3N1d++eUXRVEU5cSJEwqg7N27V19mzZo1ikqlUi5duqQoiqJ88803iqOjo379X0VRlNGjRysNGjTQf9+7d2+lW7duReIJDQ1V/ve//5WpLomJiQqgbN26VR+3qamp8ttvv+nLREVFKYCyc+dORVF0H1rUarUSHx+vLzNr1izFzs5OH/s777yjNG7cuMi1+vTpo3Tu3Fn/fXm/r46Ojsp3331XJeuQlpam+Pn5KRs2bFAeeeQRfaKuSnWZMGGCEhAQUOxrVakeo0ePVtq0aXPX16vy7/uIESOUunXrKlqttkq9JxVFmr7LSU5ODvv37yciIkK/T61WExERwc6dOw0YWaFz584RHx9fJEZ7e3tCQ0P1Me7cuRMHBweCg4P1ZSIiIlCr1ezevVtfpl27dpiZmenLdO7cmejoaG7cuKEvc/t1CsqU9WeRkpICgJOTEwD79+8nNze3yDUaNmyIl5dXkbo0bdoUV1fXIjGkpqZy/PjxEsVZnu9rfn4+ixcvJiMjg7CwsCpZh8jISLp163bH9apaXU6fPo2Hhwe+vr7069eP2NjYKlePFStWEBwczLPPPouLiwuBgYHMnTtX/3pV/X3Pycnh559/5uWXX0alUlWp96SiSKIuJ9euXSM/P7/IfxQAV1dX4uPjDRRVUQVx3CvG+Ph4XFxcirxuYmKCk5NTkTLFneP2a9ytTFl+FlqtlpEjRxIeHk6TJk305zczM8PBweGedSlrnKmpqdy8ebNc3tejR49iY2ODubk5r7/+OsuWLaNRo0ZVqg4Aixcv5sCBA0yePPmO16pSXUJDQ5k/fz5r165l1qxZnDt3jrZt25KWllal6hETE8OsWbPw8/Nj3bp1DB48mOHDh7NgwYIisVS13/fly5eTnJzMgAED9OeuKu9JRZFFOYTRi4yM5NixY/z777+GDqVMGjRowKFDh0hJSWHp0qX079+frVu3GjqsUomLi2PEiBFs2LChyDrVVVHBAiAAzZo1IzQ0FG9vb3799VcsLS0NGFnpaLVagoODmTRpEgCBgYEcO3aM2bNn079/fwNHV3bff/89Xbt2xcPDw9ChGA25oy4nzs7OaDSaO3oiJiQk4ObmZqCoiiqI414xurm5kZiYWOT1vLw8kpKSipQp7hy3X+NuZUr7sxg6dCgrV65k8+bNRZYwdHNzIycnh+Tk5HvWpaxx2tnZYWlpWS7vq5mZGfXq1SMoKIjJkycTEBDAl19+WaXqsH//fhITE2nRogUmJiaYmJiwdetWvvrqK0xMTHB1da0ydfkvBwcH6tevz5kzZ6rUe+Lu7k6jRo2K7PP399c341fF3/cLFy6wcePGIqu3VaX3pKJIoi4nZmZmBAUFsWnTJv0+rVbLpk2bCAsLM2BkherUqYObm1uRGFNTU9m9e7c+xrCwMJKTk9m/f7++zN9//41WqyU0NFRfZtu2beTm5urLbNiwgQYNGuDo6Kgvc/t1CsqU9GehKApDhw5l2bJl/P3339SpU6fI60FBQZiamha5RnR0NLGxsUXqcvTo0SJ/iDZs2ICdnZ3+D9z94qyI91Wr1ZKdnV2l6tChQweOHj3KoUOH9FtwcDD9+vXTf11V6vJf6enpnD17Fnd39yr1noSHh98xZPHUqVN4e3sDVev3vcC8efNwcXGhW7du+n1V6T2pMAbtylbNLF68WDE3N1fmz5+vnDhxQnnttdcUBweHIj0RK1paWppy8OBB5eDBgwqgTJs2TTl48KBy4cIFRVF0wzUcHByUP//8Uzly5Ijy1FNPFTtcIzAwUNm9e7fy77//Kn5+fkWGayQnJyuurq7Kiy++qBw7dkxZvHixYmVldcdwDRMTE2Xq1KlKVFSUMmHChFIN1xg8eLBib2+vbNmypciwjczMTH2Z119/XfHy8lL+/vtvZd++fUpYWJgSFhamf71gyEanTp2UQ4cOKWvXrlVq1qxZ7JCNt99+W4mKilJmzpxZ7JCNsr6vY8aMUbZu3aqcO3dOOXLkiDJmzBhFpVIp69evrzJ1uJvbe31Xpbq8+eabypYtW5Rz584p27dvVyIiIhRnZ2clMTGxStVjz549iomJifLxxx8rp0+fVhYuXKhYWVkpP//8s75MVfl9VxRdD2svLy9l9OjRd7xWVd6TiiKJupx9/fXXipeXl2JmZqaEhIQou3btqtTrb968WQHu2Pr3768oim7Ixnvvvae4uroq5ubmSocOHZTo6Ogi57h+/brSt29fxcbGRrGzs1MGDhyopKWlFSlz+PBhpU2bNoq5ublSq1Yt5ZNPPrkjll9//VWpX7++YmZmpjRu3FhZtWpVietRXB0AZd68efoyN2/eVIYMGaI4OjoqVlZWSs+ePZUrV64UOc/58+eVrl27KpaWloqzs7Py5ptvKrm5uXf8zJo3b66YmZkpvr6+Ra5RoKzv68svv6x4e3srZmZmSs2aNZUOHTrok3RVqcPd/DdRV5W69OnTR3F3d1fMzMyUWrVqKX369Cky9riq1ENRFOWvv/5SmjRpopibmysNGzZU5syZU+T1qvL7riiKsm7dOgW4Iz5FqVrvSUVQKYqiGORWXgghhBD3Jc+ohRBCCCMmiVoIIYQwYpKohRBCCCMmiVoIIYQwYpKohRBCCCMmiVoIIYQwYpKoK0B2djYTJ04kOzvb0KE8EKmH8akudaku9YDqU5fqUg+oXnUBkHHUFSA1NRV7e3tSUlKws7MzdDhlJvUwPtWlLtWlHlB96lJd6gHVqy4gd9RCCCGEUZNELYQQQhgxWY+6GHl5eRw8eBBXV1fU6tJ/lklLSwPg0qVLpKamlnd4lUbqYXyqS12qSz2g+tSlutQDqkZdtFotCQkJBAYGYmJy71Qsz6iLsXfvXkJCQgwdhhBCiGpuz549tGzZ8p5l5I66GK6uroDuB+ju7m7gaIQQQlQ3V65cISQkRJ9v7kUSdTEKmrvd3d2pXbu2gaMRQghRXZXk8ap0JhNCCCGMmCRqIYQQwohJohZCCCGMmDyjFkKI2+Tn55Obm2voMEQVZ2pqikajKZdzSaKuaFotXD8NNfygDGOyhRCVQ1EU4uPjSU5ONnQooppwcHDAzc0NlUr1QOeRRF2Bbubks+DPtbx+/HkUcztUtYKgdstbWzBYORk6RCHELQVJ2sXFBSsrqwf+4yoeXoqikJmZSWJiIsADD/OVRF2B9l1IYs/BA/Q3NcMyOxViNuu2Ak6+uqRdK1iXuF2bgImZ4QIW4iGVn5+vT9I1atQwdDiiGrC0tAQgMTERFxeXB2oGl0RdgWpYm+Ma3IMnzrbGIukkzdVnCFSfobnqDPXUlyEpRrcdWaI7QGMOHs3BMwQ6fgjyiV6ISlHwTNrKysrAkYjqpOD/U25uriRqY9XIw47JTzcF4EpKa3aevc7Os9eZHnOd1BtXaa4+S3PVWQLVpwlUn8EhPwPidpOZdgPNY+9jb
nLrjf3nc7CuCf7dwdLRgDUSonqT5m5Rnsrr/5Mk6kribm/J0y1q83QL3UxncUmZ7Iy5zq6z1/kt5jpXUm7io4onUHUGJVHFmonrCfJ2pI23Na/v/gx1fpauibwgUacngrkdmFoYsFZCCCEqmiRqA/F0ssLTyYrewZ4oisL565m6O+6YIHaevU52ejY7zl7n6Nk4ckweJ9DkAks3ZfFIg4u0q++My7qxcHIV1GkL9SKgbgeoUVeay4UQD8zHx4eRI0cycuTIEpXfsmUL7du358aNGzg4OFRYXPPnz2fkyJEPXc98SdRGQKVSUcfZmjrO1jwf6oWiKJy9ms7Os9fZcfY6P5x5jtSsPDhyhb+OXAHgb+v9+ObfhNPrdRuAgzfU66BL3HXagbmtAWslhKho92tanTBhAhMnTiz1effu3Yu1tXWJy7du3ZorV65gb29f6muJ+5NEbYRUKhX1XGyp52LLi2E+5OVrORSXzNZTV9kSfZWjl1J4LOMjGqjieER9mMdMjxGkOolp8gXY94NuU5uAZyuo9xjU6whuTeVuW4hq5sqVK/qvlyxZwvjx44mOjtbvs7Gx0X+tKAr5+fn3XfsYoGbNmqWKw8zMDDc3t1IdI0pOZuCoAkw0aoJ9nHizUwP+GtaGfe9GMK13cxo0a8Vv5k/zXNZYmt38loE5bzM/rxOX1O6gzYML/8KmD+DbtvBlAMTtNXRVhBDlyM3NTb/Z29ujUqn03588eRJbW1vWrFlDUFAQ5ubm/Pvvv5w9e5annnoKV1dXbGxsaNmyJRs3bixyXh8fH6ZPn67/XqVS8d1339GzZ0+srKzw8/NjxYoV+te3bNmCSqXSN0nPnz8fBwcH1q1bh7+/PzY2NnTp0qXIB4u8vDyGDx+Og4MDNWrUYPTo0fTv358ePXqU6mcwa9Ys6tati5mZGQ0aNOCnn37Sv6YoChMnTsTLywtzc3M8PDwYPny4/vVvvvkGPz8/LCwscHV15ZlnninVtSuL3FFXQc425vqOaflahSMXC+623Xj/YiATM8FLlUA79RE6mBwhXH0M0+RYVI7ehSc59w8o+eAdDhpTw1VGCCOlKAo3c/MNcm1LU0259RgeM2YMU6dOxdfXF0dHR+Li4nj88cf5+OOPMTc358cff6R79+5ER0fj5eV11/O8//77fPbZZ0yZMoWvv/6afv36ceHCBZycip+4KTMzk6lTp/LTTz+hVqt54YUXeOutt1i4cCEAn376KQsXLmTevHn4+/vz5Zdfsnz5ctq3b1/iui1btowRI0Ywffp0IiIiWLlyJQMHDqR27dq0b9+e33//nS+++ILFixfTuHFj4uPjOXz4MAD79u1j+PDh/PTTT7Ru3ZqkpCT++eefUvxkK48k6ipOo1YR6OVIoJcjIyPqk5SRwz+nr7I1+iprT3vyc3pHLMhmgPd13rJ0LnzDt34K5/+BLp9Aq8GGrIIQRulmbj6Nxq8zyLVPfNAZK7Py+fP8wQcf0LFjR/33Tk5OBAQE6L//8MMPWbZsGStWrGDo0KF3Pc+AAQPo27cvAJMmTeKrr75iz549dOnSpdjyubm5zJ49m7p16wIwdOhQPvjgA/3rX3/9NWPHjqVnz54AzJgxg9WrV5eqblOnTmXAgAEMGTIEgFGjRrFr1y6mTp1K+/btiY2Nxc3NjYiICExNTfHy8iIkJASA2NhYrK2teeKJJ7C1tcXb25vAwMBSXb+ySNN3NeNkbcZTzWsxrU9z9vxfBD+9EgKmlsy+4MGEFcdRFAUUBWrUAytnaPB44cFHl8JvA+H4MshON1wlhBDlJjg4uMj36enpvPXWW/j7++Pg4ICNjQ1RUVHExsbe8zzNmjXTf21tbY2dnZ1+isziWFlZ6ZM06KbRLCifkpJCQkKCPmkCaDQagoKCSlW3qKgowsPDi+wLDw8nKioKgGeffZabN2/i6+vLoEGDWLZsGXl5eQB07NgRb29vfH19efHFF1m4cCGZmZmlun5lkTvqakytVtHWryZfPhfI6z/vZ+HuWLxrWPFau7rQfTp0+xzUt82Wc/Q3OLUWjv+hmyWtXgfwfxIadJGJVsRDx9JUw4kPOhvs2uXlv72333rrLTZs2MDUqVOpV68elpaWPPPMM+Tk5NzzPKamRR+RqVQqtFptqcorilLK6B+Mp6cn0dHRbNy4kQ0bNjBkyBCmTJnC1q1bsbW15cCBA2zZsoX169czfvx4Jk6cyN69eyt0iFlZyB31Q6BzYzfe7dYIgEmrT7L66K0OHer//DF45B0IHwGOdSA/G6JXw/LXYYof/PE/uHK4kiMXwnBUKhVWZiYG2SpyhrTt27czYMAAevbsSdOmTXFzc+P8+fMVdr3i2Nvb4+rqyt69hR1c8/PzOXDgQKnO4+/vz/bt24vs2759O40aNdJ/b2lpSffu3fnqq6/YsmULO3fu5OjRowCYmJgQERHBZ599xpEjRzh//jx///33A9SsYsgd9UPi5XAfYq9nsGDnBd5YcghXOwuCvP9zl1wrSLdFvA8JxyHqL4haAYkn4Mhi3ebTFsIiwa+zLNspRBXk5+fHH3/8Qffu3VGpVLz33nv3vDOuKMOGDWPy5MnUq1ePhg0b8vXXX3Pjxo1SfUh5++236d27N4GBgURERPDXX3/xxx9/6Huxz58/n/z8fEJDQ7GysuLnn3/G0tISb29vVq5cSUxMDO3atcPR0ZHVq1ej1Wpp0KBBRVW5zOQv7UNCpVIxvntjIvxdyM7TMujHfVy4nnG3wuDWBNqPhSE74dW/oUkvUGl0HdB+eQ5mBMOeuZB7s3IrIoR4INOmTcPR0ZHWrVvTvXt3OnfuTIsWLSo9jtGjR9O3b19eeuklwsLCsLGxoXPnzlhYlHxa5B49evDll18ydepUGjduzLfffsu8efN49NFHAd160HPnziU8PJxmzZqxceNG/vrrL2rUqIGDgwN//PEHjz32GP7+/syePZtffvmFxo0bV1CNy06lVPZDgyrg4sWLeHp6EhcXR+3atQ0dTrnKzMmjz7e7OHopBV9na/4Y0hoHqxIurZlyEXZ/C/sXQHYKmFrBG8dlXW1R5WVlZXHu3Dnq1KlTqkQhyo9Wq8Xf35/evXvz4YcfGjqccnGv/1elyTMGv6OeOXMmPj4+WFhYEBoayp49e+5ZPjk5mcjISNzd3TE3N6d+/fpFuvTn5+fz3nvvUadOHSwtLalbty4ffvhhpXdiMFZWZiZ83z+YWg6WxFzL4LUf95OdV8Kxova1odOHMOoEdP0M2o4qmqS3fAKXD1VI3EKI6uXChQvMnTuXU6dOcfToUQYPHsy5c+d4/vnnDR2a0TFool6yZAmjRo1iwoQJHDhwgICAADp37nzXLv85OTl07NiR8+fPs3TpUqKjo5k7dy61atXSl/n000+ZNWsWM2bMICoqik8//ZTPPvuMr7/+urKqZfRc7Cz4YUBLbM1N2HM+iXeWHindBxlzGwj9H7R7u3Bf3F7YMhm+7wiZSeUftBCiWlGr1cyfP5+WLVsSHh7O0aNH2bhxI/7+/oYOzegYtDPZtGnTGDRo
EAMHDgRg9uzZrFq1ih9++IExY8bcUf6HH34gKSmJHTt26Lv++/j4FCmzY8cOnnrqKbp166Z//ZdffrnvnfrDpoGbLbNeCGLAvD38eegyXk5WvNnpATpRWDpA095galn0LjvqL6j7GJiVfIJ/IUT15+npeUePbVE8g91R5+TksH//fiIiIgqDUauJiIhg586dxR6zYsUKwsLCiIyMxNXVlSZNmjBp0iTy8wubblu3bs2mTZs4deoUAIcPH+bff/+la9euFVuhKqiNnzOTejYF4Ou/z/Drvriyn8zZD3rNhe5fFu6LPwZLXoBpjWDj+5AW/4ARCyHEw8dgd9TXrl0jPz8fV1fXIvtdXV05efJkscfExMTw999/069fP1avXs2ZM2cYMmQIubm5TJgwAdDNa5uamkrDhg3RaDTk5+fz8ccf069fv7vGkp2dTXZ2tv77tLS0cqhh1dC7pSexSZnM2HyG//vjKB72lrTxcy77CW8fWpF5DZx8ISkG/p0GO2dAs97QejjUNL4hEEIIYYwM3pmsNLRaLS4uLsyZM4egoCD69OnDuHHjmD17tr7Mr7/+ysKFC1m0aBEHDhxgwYIFTJ06lQULFtz1vJMnT8be3l6/3T5Y/mHwZqf6PNXcgzytwuCf9xMdX04fVHwfhaH7oM9C8AyF/Bw4+DPMDIFFfeD8v7rpTIUQQtyVwRK1s7MzGo2GhISEIvsTEhLuuq6pu7s79evXR6MpnFHL39+f+Ph4/fR3b7/9NmPGjOG5556jadOmvPjii7zxxhtMnjz5rrGMHTuWlJQU/XbixIlyqGHVoVKp+OyZZoT4OJGWncfL8/eSmJpVPidXa8D/CXhlPby8Hho+Aah0U5XO7wZzH9PNLa41zCpFQghh7AyWqM3MzAgKCmLTpk36fVqtlk2bNhEWFlbsMeHh4Zw5c6bILDqnTp3C3d0dMzPdWODMzEzU/5kxS6PR3HPmHXNzc+zs7PSbra3tg1StSjI30fDti0H4OltzKfkmryzYR2ZOXvlexCsUnlsIw/ZD8MtgYgGXD8BvA+CrQNg9B3LuMgmLEEI8pAza9D1q1Cjmzp3LggULiIqKYvDgwWRkZOh7gb/00kuMHTtWX37w4MEkJSUxYsQITp06xapVq5g0aRKRkZH6Mt27d+fjjz9m1apVnD9/nmXLljFt2jT9Umri7hytzZg3sCVO1mYcvZTC8F8Okq+tgKbpGnXhiS9g5DF4ZDRYOkHyBVjzNlw7Vf7XE0KIqkwxsK+//lrx8vJSzMzMlJCQEGXXrl361x555BGlf//+Rcrv2LFDCQ0NVczNzRVfX1/l448/VvLy8vSvp6amKiNGjFC8vLwUCwsLxdfXVxk3bpySnZ1d4pji4uIUQImLi3vg+lVF+84nKX7jViveo1cqE/48VvEXzM5QlN1zFGXZkKL7Dy9RlKunK/764qF38+ZN5cSJE8rNmzcNHYpBPPLII8qIESP033t7eytffPHFPY8BlGXLlj3wtcvrPPcyYcIEJSAgoEKvUZx7/b8qTZ4x+KIcQ4cOveti5Vu2bLljX1hYGLt27brr+WxtbZk+fTrTp08vpwgfPkHejnzRuzmRiw4wf8d5ajta0jOwFmlZeaRn55GWlUdaVi7p2bd/n0d6dq7u36w80rIL92m1MPjRurzQyrv4C5pZQcigovvSE+HPoboOaEN2gotMgiDEf3Xv3p3c3FzWrl17x2v//PMP7dq14/Dhw0XWki6JvXv33rE85oOaOHEiy5cv59ChQ0X2X7lyBUdHWUb3XgyeqIVx6tbMnYs3GjJ5zUk+WhXFR6uiHuh87y4/Rm6+loHhdUp2QE66bqKUzGtQs2Hh/ui14B4Adu4PFI8Q1cErr7xCr169uHjx4h3zRc+bN4/g4OBSJ2mAmjVrlleI93W3zsOiUJUaniUq12vtfHmtnS8atW5stLWZBlc7c+q52NDc04G2fs483tSN3sG1eaVNHUZ08OPdbv582qspM59vwYKXQ/hjSGtef6QuAO//dYIFO86X7OJOvvD8Yui/snBsdlYqLB0IXzTSDe+KWgn5uRVQcyGqhieeeIKaNWsyf/78IvvT09P57bffeOWVV7h+/Tp9+/alVq1aWFlZ0bRpU3755Zd7ntfHx6dIq+Tp06dp164dFhYWNGrUiA0bNtxxzOjRo6lfvz5WVlb4+vry3nvvkZur+/2cP38+77//PocPH0alUqFSqfQxq1Qqli9frj/P0aNHeeyxx7C0tKRGjRq89tprpKen618fMGAAPXr0YOrUqbi7u1OjRg0iIyP11yoJrVbLBx98QO3atTE3N6d58+ZFWiVycnIYOnQo7u7uWFhY4O3trR85pCgKEydOxMvLC3Nzczw8PBg+fHiJr10Wckct7kqlUvF/j/szqmN9TDVqfcIurUBPB9Qq+GbLWSasOI5KBS+F+ZTsYNPbVpxJT9TdTcfu1A3vOrUWrF2geV8IfAmc65UpPiHuqSwjETTmoLn15zU/D/KzQaXWTbF7v/OWYrpdExMTXnrpJebPn8+4ceP0azn/9ttv5Ofn07dvX9LT0wkKCmL06NHY2dmxatUqXnzxRerWrUtISMh9r6HVann66adxdXVl9+7dpKSkMHLkyDvK2draMn/+fDw8PDh69CiDBg3C1taWd955hz59+nDs2DHWrl2rXyva3t7+jnNkZGTQuXNnwsLC2Lt3L4mJibz66qsMHTq0yIeRzZs34+7uzubNmzlz5gx9+vShefPmDBo06I5zFufLL7/k888/59tvvyUwMJAffviBJ598kuPHj+Pn58dXX33FihUr+PXXX/Hy8iIuLo64ON3Mjb///jtffPEFixcvpnHjxsTHx3P48OESXbesJFGL+7Iw1dy/0D2oVCre7twArQKzt55l/J/HUalUvHi3Z9Z341wPXl4LV0/BwZ/g8C+QkQjbv9RtXq2hxYvQ6CmZW1yUn0kepT/m2fnQ+NZIk5N/6YYgereBgasKy0xvCpnX7zx2YkqpLvXyyy8zZcoUtm7dql+Hed68efTq1Us/idNbb72lLz9s2DDWrVvHr7/+WqJEvXHjRk6ePMm6devw8ND9LCZNmnTHtMzvvvuu/msfHx/eeustFi9ezDvvvIOlpSU2NjaYmJjcs6l70aJFZGVl8eOPP+qfkc+YMYPu3bvz6aef6meydHR0ZMaMGWg0Gho2bEi3bt3YtGlTiRP11KlTGT16NM899xygW8xp8+bNTJ8+nZkzZxIbG4ufnx9t2rRBpVLh7V34tyo2NhY3NzciIiIwNTXFy8urRD/HByFN36JSqFQqRndpwGvtfAF4b/kxFu2OLdvJata/tdxmlG7Ws/pddHcrsTtg+WCY2gD+GgGX9svMZ6Laa9iwIa1bt+aHH34A4MyZM/zzzz+88sorgG7p3w8//JCmTZvi5OSEjY0N69atIza2ZL9/UVFReHp66pM0UOxcF0uWLCE8PBw3NzdsbGx49913S3yN268VEBBQpCNbeHg4Wq2W6Oho/b7GjRsXmfjK3d39rqsu/ldqaiqXL18mPDy8yP7
[remaining base64-encoded PNG data omitted: loss plot showing training and validation DPO loss decreasing over epochs and tokens seen]\n",
+      "text/plain": [
+       "<Figure size 500x300 with 2 Axes>
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from previous_chapters import plot_losses\n", + "\n", + "\n", + "epochs_tensor = torch.linspace(0, num_epochs, len(tracking[\"train_losses\"]))\n", + "plot_losses(\n", + " epochs_seen=epochs_tensor,\n", + " tokens_seen=tracking[\"tokens_seen\"],\n", + " train_losses=tracking[\"train_losses\"],\n", + " val_losses=tracking[\"val_losses\"],\n", + " label=\"loss\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "7f8bc233-895f-46d5-8e01-202b991cd60c", + "metadata": { + "id": "7f8bc233-895f-46d5-8e01-202b991cd60c" + }, + "source": [ + "- As we can see above, the loss continues to improve, which is a good sign\n", + "- Based on the downward slope, one might be tempted to train the model a bit further (and readers are encouraged to try this), but not that DPO is prone to collapse, where the model may start generating nonsensical responses\n", + "- Next, let's take a look at the reward margins:" + ] + }, + { + "cell_type": "code", + "execution_count": 52, + "id": "dmbq6ruuf0Cl", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 307 + }, + "id": "dmbq6ruuf0Cl", + "outputId": "c2886c16-57da-41bd-c9f0-e936da9d9e4d" + }, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAeoAAAEiCAYAAAA21pHjAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABn+ElEQVR4nO3deVhUZfvA8e8My7CDoGyyKyIq4o6IlaaJS+bSYmalWfZaLpkt5luZ1q+stLKyslV7K9PMNFNTcU/FXRQ3XFhVFhXZ95nz+2NkkEQFBAbw/lzXXHLOec4592Fw7nnOeRaVoigKQgghhKiX1MYOQAghhBA3JolaCCGEqMckUQshhBD1mCRqIYQQoh6TRC2EEELUY5KohRBCiHpMErUQQghRj0miFkIIIeoxSdRCCCFEPSaJWohGJD4+HpVKRVRUlLFDEULUEEnUQtQzKpXqpq+ZM2caO0QhRB0yNXYAQojykpOTDT8vXbqUGTNmEBMTY1hnY2NjjLCEEEYiNWoh6hlXV1fDy97eHpVKZVh2dnbm448/xsPDA41GQ4cOHVi3bt0Nj6XVahk7diytW7cmMTERgD///JNOnTphYWGBn58fs2bNoqSkxLCPSqXiu+++Y9iwYVhZWeHv78+qVasM269cucKoUaNo1qwZlpaW+Pv7s3DhwhvG8PvvvxMUFISlpSVOTk707duX3Nxcw/bvvvuOwMBALCwsaN26NV9++WW5/ZOSknjkkUdwcHDA0dGRIUOGEB8fb9g+ZswYhg4dyty5c3Fzc8PJyYkJEyZQXFxc6d+5EPWaIoSotxYuXKjY29sblj/++GPFzs5O+fXXX5WTJ08qr776qmJmZqacOnVKURRFiYuLUwDl0KFDSkFBgTJs2DClY8eOSlpamqIoirJ9+3bFzs5OWbRokXL27Fllw4YNio+PjzJz5kzDOQDFw8NDWbx4sXL69Gll8uTJio2NjXL58mVFURRlwoQJSocOHZR9+/YpcXFxSkREhLJq1aoK479w4YJiamqqfPzxx0pcXJxy5MgR5YsvvlCys7MVRVGUn3/+WXFzc1OWL1+uxMbGKsuXL1ccHR2VRYsWKYqiKEVFRUpgYKAyduxY5ciRI8rx48eVxx57TAkICFAKCwsVRVGU0aNHK3Z2dsr48eOVEydOKH/99ZdiZWWlfPPNNzX7ZghhJJKohajH/p2o3d3dlXfffbdcma5duyrPP/+8oihlifqff/5R+vTpo/Ts2VPJyMgwlO3Tp4/y3nvvldv/p59+Utzc3AzLgPLGG28YlnNychRA+fvvvxVFUZTBgwcrTz31VKXiP3DggAIo8fHxFW5v0aKFsnjx4nLr3nnnHSU0NNQQW0BAgKLT6QzbCwsLFUtLS2X9+vWKougTtbe3t1JSUmIo8/DDDysjRoyoVIxC1HfyjFqIBiIrK4sLFy4QFhZWbn1YWBiHDx8ut27kyJF4eHiwefNmLC0tDesPHz7Mzp07effddw3rtFotBQUF5OXlYWVlBUD79u0N262trbGzsyMtLQ2A5557jgcffJCDBw/Sr18/hg4dSo8ePSqMOTg4mD59+hAUFER4eDj9+vXjoYceokmTJuTm5nL27Fmefvppxo0bZ9inpKQEe3t7Q7xnzpzB1ta23HELCgo4e/asYblt27aYmJgYlt3c3IiOjr7Jb1OIhkMStRCN0MCBA/n555+JjIzk3nvvNazPyclh1qxZDB8+/Lp9LCwsDD+bmZmV26ZSqdDpdAAMGDCAhIQE1q5dS0REBH369GHChAnMnTv3umOamJgQERHBrl272LBhA59//jmvv/46e/bsMXwp+PbbbwkJCbluv9J4O3fuzC+//HLdsZs1a1apeIVo6CRRC9FA2NnZ4e7uzs6dO7nnnnsM63fu3Em3bt3KlX3uuedo164dDzzwAGvWrDGU79SpEzExMbRs2fK2YmnWrBmjR49m9OjR3HXXXbzyyisVJmrQJ82wsDDCwsKYMWMG3t7erFixgqlTp+Lu7k5sbCyjRo2qcN9OnTqxdOlSnJ2dsbOzu62YhWioJFEL0YC88sorvPXWW7Ro0YIOHTqwcOFCoqKiKqxxTpo0Ca1Wy/3338/ff/9Nz549mTFjBvfffz9eXl489NBDqNVqDh8+zNGjR/m///u/SsUwY8YMOnfuTNu2bSksLGT16tUEBgZWWHbPnj1s2rSJfv364ezszJ49e7h48aKh/KxZs5g8eTL29vb079+fwsJC9u/fz5UrV
5g6dSqjRo1izpw5DBkyhLfffhsPDw8SEhL4448/ePXVV/Hw8Kj+L1OIBkIStRANyOTJk8nMzOSll14iLS2NNm3asGrVKvz9/SssP2XKFHQ6HQMHDmTdunWEh4ezevVq3n77bT744APMzMxo3bo1zzzzTKVjMDc3Z/r06cTHx2Npacldd93FkiVLKixrZ2fH9u3bmTdvHllZWXh7e/PRRx8xYMAAAJ555hmsrKyYM2cOr7zyCtbW1gQFBTFlyhQArKys2L59O9OmTWP48OFkZ2fTvHlz+vTpIzVsccdQKYqiGDsIIYQQQlRMBjwRQggh6jFJ1EIIIUQ9JolaCCGEqMckUQshhBD1mCRqIYQQoh6TRC2EEELUY5Koq+iLL77Ax8cHCwsLQkJC2Lt3b52ef/v27QwePBh3d3dUKhUrV64st11RFGbMmIGbmxuWlpb07duX06dPlyuTnp7OqFGjsLOzw8HBgaeffpqcnJxyZY4cOcJdd92FhYUFnp6efPjhh9fFsmzZMlq3bo2FhQVBQUGsXbu2Stcye/Zsunbtiq2tLc7OzgwdOrTcvMugH9N5woQJODk5YWNjw4MPPkhqamq5MomJiQwaNAgrKyucnZ155ZVXyk3bCLB161Y6deqERqOhZcuWLFq06Lp4bve9/eqrr2jfvj12dnbY2dkRGhrK33//3SCv5d/ef/99VCqVoX9zQ7uemTNnolKpyr1at27dIK8F4Pz58zz++OM4OTlhaWlJUFAQ+/fvN2xvSJ8DPj4+1703KpWKCRMmAA3vvakVxp0TpGFZsmSJYm5urvzwww/KsWPHlHHjxikODg5KampqncWwdu1a5fXXX1f++OMPBVBWrFhRbvv777+v2NvbKytXrlQOHz6sPPDAA4qvr6+Sn59vKNO/f38lODhY2b17t/LPP/8oLVu2VEaOHGnYnpmZqbi4uCijRo1Sjh49qvz666+KpaWl8vXXXxvK7Ny5UzExMVE+/PBD5fjx48obb7yhmJmZKdHR0ZW+lvDwcGXhwoXK0aNHlaioKGXgwIGKl5eXkpOTYygzfvx4xdPTU9m0aZOyf/9+pXv37kqPHj0M20tKSpR27dopffv2VQ4dOqSsXbtWadq0qTJ9+nRDmdjYWMXKykqZOnWqcvz4ceXzzz9XTExMlHXr1hnK1MR7u2rVKmXNmjXKqVOnlJiYGOW///2vYmZmphw9erTBXcu19u7dq/j4+Cjt27dXXnjhBcP6hnQ9b731ltK2bVslOTnZ8Lp48WKDvJb09HTF29tbGTNmjLJnzx4lNjZWWb9+vXLmzBlDmYb0OZCWllbufYmIiFAAZcuWLYqiNKz3prZIoq6Cbt26KRMmTDAsa7Vaxd3dXZk9e7ZR4vl3otbpdIqrq6syZ84cw7qMjAxFo9Eov/76q6IoinL8+HEFUPbt22co8/fffysqlUo5f/68oiiK8uWXXypNmjQxzPerKIoybdo0JSAgwLD8yCOPKIMGDSoXT0hIiPKf//yn2teTlpamAMq2bdsMsZuZmSnLli0zlDlx4oQCKJGRkYqi6L+4qNVqJSUlxVDmq6++Uuzs7Azxv/rqq0rbtm3LnWvEiBFKeHi4Ybm23tsmTZoo3333XYO9luzsbMXf31+JiIhQ7rnnHkOibmjX89ZbbynBwcEVbmto1zJt2jSlZ8+eN9ze0D8HXnjhBaVFixaKTqdrcO9NbZFb35VUVFTEgQMH6Nu3r2GdWq2mb9++REZGGjGyMnFxcaSkpJSL0d7enpCQEEOMkZGRODg40KVLF0OZvn37olar2bNnj6HM3Xffjbm5uaFMeHg4MTExXLlyxVDm2vOUlrmd30VmZiYAjo6OABw4cIDi4uJy52ndujVeXl7lricoKAgXF5dycWRlZXHs2LFKxVob761Wq2XJkiXk5uYSGhraYK9lwoQJDBo06LpzNsTrOX36NO7u7vj5+TFq1CgSExMb5LWsWrWKLl268PDDD+Ps7EzHjh359ttvDdsb8udAUVERP//8M2PHjkWlUjW496a2SKKupEuXLqHVasv9MQC4uLiQkpJipKjKK43jZjGmpKTg7OxcbrupqSmOjo7lylR0jGvPcaMy1f1d6HQ6pkyZQlhYGO3atTOcw9zcHAcHh5teT3VjzcrKIj8/v0bf2+joaGxsbNBoNIwfP54VK1bQpk2bBnktS5Ys4eDBg8yePfu6bQ3tekJCQli0aBHr1q3jq6++Ii4ujrvuuovs7OwGdy2xsbF89dVX+Pv7s379ep577jkmT57Mjz/+WC6ehvg5sHLlSjIyMhgzZozh+A3pvaktMimHqBcmTJjA0aNH2bFjh7FDuS0BAQFERUWRmZnJ77//zujRo9m2bZuxw6qypKQkXnjhBSIiIsrNU91QlU4CAtC+fXtCQkLw9vbmt99+w9LS0oiRVZ1Op6NLly689957AHTs2JGjR4+yYMECRo8ebeTobs/333/PgAEDcHd3N3Yo9YrUqCupadOmmJiYXNfaMDU1FVdXVyNFVV5pHDeL0dXVlbS0tHLbS0pKSE9PL1emomNce44blanO72LixImsXr2aLVu2lJu20NXVlaKiIjIyMm56PdWN1c7ODktLyxp9b83NzWnZsiWdO3dm9uzZBAcH8+mnnza4azlw4ABpaWl06tQJU1NTTE1N2bZtG5999hmmpqa4uLg0qOv5NwcHB1q1asWZM2ca3Hvj5uZGmzZtyq0LDAw03MpvqJ8DCQkJbNy4sdxMbg3tvaktkqgrydzcnM6dO7Np0ybDOp1Ox6ZNmwgNDTViZGV8fX1xdXUtF2NWVhZ79uwxxBgaGkpGRgYHDhwwlNm8eTM6nY6QkBBDme3bt1NcXGwoExERQUBAAE2aNDGUufY8pWWq8rtQFIWJEyeyYsUKNm/ejK+vb7ntnTt3xszMrNx5YmJiSExMLHc90dHR5T50IiIisLOzM3yY3SrW2nxvdTodhYWFDe5a+vTpQ3R0NFFRUYZXly5dGDVqlOHnhnQ9/5aTk8PZs2dxc3NrcO9NWFjYdd0YT506hbe3N9DwPgdKLVy4EGdnZwYNGmRY19Dem1pj7NZsDcmSJUsUjUajLFq0SDl+/Ljy7LPPKg4ODuVaG9a27Oxs5dChQ8qhQ4cUQPn444+VQ4cOKQkJCYqi6LtlODg4KH/++ady5MgRZciQIRV2y+jYsaOyZ88eZceOHYq/v3+5bhkZGRmKi4uL8sQTTyhHjx5VlixZolhZWV3XLcPU1FSZO3eucuLECeWtt96qcreM5557TrG3t1e2bt1arntGXl6eocz48eMVLy8vZfPmzcr+/fuV0NBQJTQ01LC9tGtGv379lKioKGXdunVKs2bNKuya8corrygnTpxQvvjiiwq7Ztzue/vaa68p27ZtU+Li4pQjR44or732mqJSqZQNGzY0uGupyLWtvhva9bz00kvK1q1blbi4OGXnzp1K3759laZNmyppaWkN7lr27t2rmJqaKu+++65y+vRp5ZdfflGsrKyUn3/+2VCmIX0OKIq+hbWXl5cybdq067Y1pPemtkii
rqLPP/9c8fLyUszNzZVu3bopu3fvrtPzb9myRQGue40ePVpRFH3XjDfffFNxcXFRNBqN0qdPHyUmJqbcMS5fvqyMHDlSsbGxUezs7JSnnnpKyc7OLlfm8OHDSs+ePRWNRqM0b95cef/996+L5bffflNatWqlmJubK23btlXWrFlTpWup6DoAZeHChYYy+fn5yvPPP680adJEsbKyUoYNG6YkJyeXO058fLwyYMAAxdLSUmnatKny0ksvKcXFxdf93jp06KCYm5srfn5+5c5R6nbf27Fjxyre3t6Kubm50qxZM6VPnz6GJN3QrqUi/07UDel6RowYobi5uSnm5uZK8+bNlREjRpTrd9yQrkVRFOWvv/5S2rVrp2g0GqV169bKN998U257Q/ocUBRFWb9+vQJcF6OiNLz3pjaoFEVRjFKVF0IIIcQtyTNqIYQQoh6TRC2EEELUY5KohRBCiHpMErUQQghRj0miFkIIIeoxSdRCCCFEPSaJuooKCwuZOXMmhYWFxg6lRjSm62lM1wKN63oa07VA47qexnQt0PiuB0D6UVdRVlYW9vb2ZGZmYmdnZ+xwbltjup7GdC3QuK6nMV0LNK7raUzXAo3vekBq1EIIIUS9JolaCCGEqMfuuPmoS0pKOHToEC4uLqjVVf+ekp2dDcD58+fJysqq6fDqXGO6nsZ0LdC4rqcxXQs0rutpTNcCDed6dDodqampdOzYEVPTm6fiO+4Z9b59++jWrZuxwxBCCCHYu3cvXbt2vWkZo9aoZ8+ezR9//MHJkyextLSkR48efPDBBwQEBNxwn0WLFvHUU0+VW6fRaCgoKKjUOV1cXAD9L8fNza36wQshhBDVlJycTLdu3Qw56WaMmqi3bdvGhAkT6Nq1KyUlJfz3v/+lX79+HD9+HGtr6xvuZ2dnV27idJVKVelzlt7udnNzw8PDo/rBCyGEELepMo9gjZqo161bV2550aJFODs7c+DAAe6+++4b7qdSqXB1da3t8IQQQgijq1etvjMzMwFwdHS8abmcnBy8vb3x9PRkyJAhHDt2rC7CE0IIIepcvUnUOp2OKVOmEBYWRrt27W5YLiAggB9++IE///yTn3/+GZ1OR48ePTh37lyF5QsLC8nKyjK8SlsECiGEEA1BvemeNWHCBI4ePcqOHTtuWi40NJTQ0FDDco8ePQgMDOTrr7/mnXfeua787NmzmTVrVpXj0Wq1FBcXV3k/Ia5lZmaGiYmJscMQQjRg9SJRT5w4kdWrV7N9+/YqN/AyMzOjY8eOnDlzpsLt06dPZ+rUqYbl8+fP06ZNmxseT1EUUlJSyMjIqFIcQtyIg4MDrq6uVWr0KIQoL+5SLnYWpjjZaIwdSp0zaqJWFIVJkyaxYsUKtm7diq+vb5WPodVqiY6OZuDAgRVu12g0aDRlb+ytOsCXJmlnZ2esrKzkw1VUm6Io5OXlkZaWBiDdAYWopsizl3n8+z14NLFkw4t3ozG9s+5SGTVRT5gwgcWLF/Pnn39ia2tLSkoKAPb29lhaWgLw5JNP0rx5c2bPng3A22+/Tffu3WnZsiUZGRnMmTOHhIQEnnnmmduOR6vVGpK0k5PTbR9PiNK/47S0NJydneU2uBBVlJFXxItLo9DqFBIu5/H7gXOMCvE2dlh1yqiNyb766isyMzPp1asXbm5uhtfSpUsNZRITE0lOTjYsX7lyhXHjxhEYGMjAgQPJyspi165dN72dXVmlz6StrKxu+1hClCr9e5I2D0JUjaIoTFt+hJSsAsxN9enqyy1nKSrRGTmyumX0W9+3snXr1nLLn3zyCZ988kktRaQnt7tFTZK/JyGqZ/HeRNYfS8XMRMWv40IY//NBzmfk88fBczzazcvY4dWZetM9SwghhCh1OjWbd1YfB2Ba/9Z09nbkP3f7ATB/yxmKtXdOrVoStbghHx8f5s2bV+nyW7duRaVS1XqL+UWLFuHg4FCr5xBCGE9BsZZJvx6ioFjH3a2aMTZM39B4VIg3TW00nLuSz4qD540cZd2RRN0IqFSqm75mzpxZrePu27ePZ599ttLle/ToQXJyMvb29tU6nxBCALz/90lOpmTT1Macjx4ORq3WPz6yNDe5I2vV9aIftbg91za2W7p0KTNmzCg3aYmNjY3hZ0VR0Gq1t5z/FKBZs2ZVisPc3FzGYBdC3JbNJ1NZtCsegLkPB9PMtny/6VHdvViw7SyJ6XmsPHSeh7t4GiHKuiU16kbA1dXV8LK3tzdMWuLq6srJkyextbXl77//pnPnzmg0Gnbs2MHZs2cZMmQILi4u2NjY0LVrVzZu3FjuuP++9a1Sqfjuu+8YNmwYVlZW+Pv7s2rVKsP2f9/6Lr1FvX79egIDA7GxsaF///7lvliUlJQwefJkHBwccHJyYtq0aYwePZqhQ4dW6Xfw1Vdf0aJFC8zNzQkICOCnn34ybFMUhZkzZ+Ll5YVGo8Hd3Z3Jkycbtn/55Zf4+/tjYWGBi4sLDz30UJXOLcSNrI1OZsG2s+h0t244KyAtq4CXlx0BYGyYL70CnK8rY2VuyrhratUld0CtWhL1LSiKQl5RiVFelWkVX1mvvfYa77//PidOnKB9+/bk5OQwcOBANm3axKFDh+jfvz+DBw8mMTHxpseZNWsWjzzyCEeOHGHgwIGMGjWK9PT0G5bPy8tj7ty5/PTTT2zfvp3ExERefvllw/YPPviAX375hYULF7Jz506ysrJYuXJlla5txYoVvPDCC7z00kscPXqU//znPzz11FNs2bIFgOXLl/PJJ5/w9ddfc/r0aVauXElQUBAA+/fvZ/Lkybz99tvExMSwbt26m87cJkRlpecWMWVJFO//fZIVh+6c56nVpdMpTP3tMOm5RbRxs2PagIAbln2iuzeO1uYkXM7jz6gLdRilccit71vIL9bSZsZ6o5z7+NvhWJnXzFv09ttvc9999xmWHR0dCQ4ONiy/8847rFixglWrVjFx4sQbHmfMmDGMHDkSgPfee4/PPvuMvXv30r9//wrLFxcXs2DBAlq0aAHoh4t9++23Dds///xzpk+fzrBhwwCYP38+a9eurdK1zZ07lzFjxvD8888DMHXqVHbv3s3cuXPp3bs3iYmJuLq60rdvX8zMzPDy8qJbt26Avp++tbU1999/P7a2tnh7e9OxY8cqnV+Iivxx8BxFV2t7c9bHMDDIDUtzGfDmRr79J5YdZy5haWbCZyM73nT0MWuNKc/c5cuH62KYv+UMQzq4Y2rSeOudjffKRDldunQpt5yTk8PLL79MYGAgDg4O2NjYcOLEiVvWqNu3b2/42draGjs7O8MQmRWxsrIyJGnQD6NZWj4zM5PU1FRD0gQwMTGhc+fOVbq2EydOEBYWVm5dWFgYJ06cAODhhx8mPz8fPz8/xo0bx4oVKygpKQHgvvvuw9vbGz8/P5544gl++eUX8vLyqnR+If5NURQW79X/XzJRq0jJKuDbf2KNHFX9deRcBnPW69vVvDW4DS2dbW6xBzwZ6oODlRlxl3L560jjrlVLjfoWLM1MOP52uNHOXVOsra3LLb/88st
EREQwd+5cWrZsiaWlJQ899BBFRUU3PY6ZmVm5ZZVKhU5342dEFZWvyVv6leHp6UlMTAwbN24kIiKC559/njlz5rBt2zZsbW05ePAgW7duZcOGDcyYMYOZM2eyb98+6QImqm1vXDqxF3OxNjfhzfvb8Nof0SzYdpZHu3ribGdh7PDqldzCEl5YEkWJTmFgkCsjulaucZiNxpRxd/kxZ30Mn28+wwPBzTFRN87BhaRGfQsqlQorc1OjvGpzRKudO3cyZswYhg0bRlBQEK6ursTHx9fa+Spib2+Pi4sL+/btM6zTarUcPHiwSscJDAxk586d5dbt3Lmz3LCylpaWDB48mM8++4ytW7cSGRlJdHQ0AKampvTt25cPP/yQI0eOEB8fz+bNm2/jysSdrrQ2/UCH5ozo6klHLwfyirR8tOGUkSOrf95adYy4S7m421swe1j7Kn3uPRnqjb2lGbEXc1ndiGvVUqO+Q/n7+/PHH38wePBgVCoVb7755k1rxrVl0qRJzJ49m5YtW9K6dWs+//xzrly5UqX/rK+88gqPPPIIHTt2pG/fvvz111/88ccfhlbsixYtQqvVEhISgpWVFT///DOWlpZ4e3uzevVqYmNjufvuu2nSpAlr165Fp9MREHDjhixC3MyV3CL+jtZPMPRYNy9UKhVvDGrDg1/t4rcDSYzu4UMbdzsjR1k/rDp8gd8PnEOtgnmPdsTeyuzWO13D1sKMZ3r68lHEKT7bdJr727s3ylq11KjvUB9//DFNmjShR48eDB48mPDwcDp16lTncUybNo2RI0fy5JNPEhoaio2NDeHh4VhYVP724NChQ/n000+ZO3cubdu25euvv2bhwoX06tUL0M8H/e233xIWFkb79u3ZuHEjf/31F05OTjg4OPDHH39w7733EhgYyIIFC/j1119p27ZtLV2xaOyWX21E1q65HUEe+sF/Ons3YVB7NxQF3lt7os4f/9RHSel5vP6H/q7WxHv96ebrWK3jjA7zwc7ClLMXc1kTnXzrHRoglXKH/cWcO3cOT09PkpKS8PDwKLetoKCAuLg4fH19q5QoRM3R6XQEBgbyyCOP8M477xg7nBohf1d3DkVR6PvxNs5ezOXdYe3KTceYlJ5Hn4+2UaTVsXBMV3q3vr6P8J2iRKvjka8jOZiYQRfvJix5tvtttdr+dONpPtl4Cn9nG9ZPudswkll9drNc9G9SoxZGlZCQwLfffsupU6eIjo7mueeeIy4ujscee8zYoQlRZfvir3D2Yi5W5iY8EOxebpunoxVPhfkA8O7aE3fEQB038tmm0xxMzMDWwpR5j3a47a5VY8J8sLUw5XRaDn8fTamhKOsPSdTCqNRqNYsWLaJr166EhYURHR3Nxo0bCQwMNHZoQlTZr6WNyILdsbW4/nnr871b4mhtzpm0HH7dl1TX4dULe2IvM3/LGQDeGxaERxOr2z6mvaWZYeKOzzadbnQjwUmiFkbl6enJzp07yczMJCsri127dsnIYKJBysgrMjwjHXmDuZLtLc2Y0tcfgE8iTpFVUFxn8dUHGXlFTFkahU6Bhzt7MPhfdx1ux9gwX2w1psSkZrP+WOOqVUuiFkKIGvDHwfMUleho42ZHe48bzyA3spsXLZpZk55bxJdbztZhhMalKAqvLY8mObMAv6bWzHygZhts2luZGR4tfNrIatWSqIUQ4jYpimK47T0yxOum3QvNTNT8d6D+0c4PO+JISr8zRsJbsi+JdcdSMDNR8dnIjlhrar538NievthoTDmZks2G46k1fnxjkUQthBC36UDCFU6n5WBpZsKQDre+nXtva2fCWjpRpNXx4fqYW5Zv6M6kZTPrr2MAvBremnbNa2fOegcrc8b08AH0z6obS6cmSdRCCHGbSkciGxzshl0Fjcj+TaVS8frANqhU8NfhCxxMvFLbIRrN6iMXeGhBJAXFOu7yb8rTPX1r9XxP9/TF2tyE48lZRDSSWrUkaiGEuA2ZecWsOXLzRmQVaeNux8Od9f1n/2/18UZT+yuVmVfMC0sOMXHxITLyimnX3I6PH+lQ632cm1ibM/pqrfrTRlKrlkQthBC3YcWhcxSW6GjtaksHT4cq7ftSvwAszUw4mJjRqEbV2nH6EuHztvNn1AVM1Com39uSFc+H0cxWUyfnf+YuP6zMTTh2IYtNJ248u19DIYlaGPTq1YspU6YYln18fJg3b95N91GpVKxcufK2z11Tx7mZmTNn0qFDh1o9h7iz6BuR6ftDj7pFI7KKuNhZMP4e/TSwH6w7SUGxtsZjrEv5RVpmrjrG49/vISWrAN+m1vw+PpSp/QIwq8P5oh2tzXky1AdoHLVqSdSNwODBg+nfv3+F2/755x9UKhVHjhyp8nH37dvHs88+e7vhlXOjZJmcnMyAAQNq9FxC1LaDiRnEpGZjYaZmSMfm1TrGuLt9cbHTkJSez4+74ms2wDp0OCmDQZ//w6Kr1/BEd2/WTO5JR68mRoln3F2+WJqZEH0+ky0xDbtWbdREPXv2bLp27YqtrS3Ozs4MHTqUmJhbt4BctmwZrVu3xsLCgqCgINauXVsH0dZfTz/9NBEREZw7d+66bQsXLqRLly60b9++ysdt1qwZVla3P2pQZbi6uqLR1M1tMSFqSmmXrMHt3SvViKwiVuamvBLeGoD5m89wOaewxuKrC8VaHZ9EnGL4V7uIvZiLi52GH8d2452h7bAyN94EjU42Gp4I1Y+1/ummMw26Vm3URL1t2zYmTJjA7t27iYiIoLi4mH79+pGbm3vDfXbt2sXIkSN5+umnOXToEEOHDmXo0KEcPXq0DiOvX+6//36aNWvGokWLyq3Pyclh2bJlPP3001y+fJmRI0fSvHlzrKysCAoK4tdff73pcf996/v06dPcfffdWFhY0KZNGyIiIq7bZ9q0abRq1QorKyv8/Px48803KS7Wj760aNEiZs2axeHDh1GpVKhUKkPM/771HR0dzb333oulpSVOTk48++yz5OTkGLaPGTOGoUOHMnfuXNzc3HBycmLChAmGc1WGTqfj7bffxsPDA41GQ4cOHVi3bp1he1FRERMnTsTNzQ0LCwu8vb2ZPXs2oL/lOXPmTLy8vNBoNLi7uzN58uRKn1s0fJn5xYY5kEeGVL4RWUWGd2xOW3c7sgtL+HTT6ZoIr06cScvhwa928emm02h1CoOD3Vk/5W7uadXM2KEBMO4uPyzM1BxOymDbqYvGDqfajDof9bUfiqD/IHd2dubAgQM3HEby008/pX///rzyyisAvPPOO0RERDB//nwWLFhQe8EW3fjLww2ZaMDk6q9YWwLaQlCpwczy1sc1t670aUxNTXnyySdZtGgRr7/+uuE52bJly9BqtYwcOZKcnBw6d+7MtGnTsLOzY82aNTzxxBO0aNGCbt263fIcOp2O4cOH4+Liwp49e8jMzCz3PLuUra0tixYtwt3dnejoaMaNG4etrS2vvvoqI0aM4OjRo6xbt84wV7S9/fX9KXNzcwkPDyc0NJR9+/aRlpbGM888w8SJE8t9GdmyZQtubm5s2bKFM2fOMG
LECDp06MC4ceMq9Xv79NNP+eijj/j666/p2LEjP/zwAw888ADHjh3D39+fzz77jFWrVvHbb7/h5eVFUlISSUn655HLly/nk08+YcmSJbRt25aUlBQOHz5cqfOKxuHPqPMUFOsbkXWsYiOyf1OrVbw+KJDHvt3DL3sSeTLUm5bOtjUTaC3Q6RR+jIzn/b9PUliiw87ClP8bFnTdRCTG1sxWw+Mh3ny3I45PN53mnlbNqtyOoD4waqL+t8zMTAAcHW88L2lkZCRTp04tty48PPyGDZEKCwspLCy7lZSdnV294N6rxh/gw4ug7TD9zyf/gmVjwLsnPLWmrMy8IMi7fP2+MzOrdKqxY8cyZ84ctm3bZpiHeeHChTz44IPY29tjb2/Pyy+/bCg/adIk1q9fz2+//VapRL1x40ZOnjzJ+vXrcXfX/y7ee++9654rv/HGG4affXx8ePnll1myZAmvvvoqlpaW2NjYYGpqiqur6w3PtXjxYgoKCvjf//6HtbX+C8v8+fMZPHgwH3zwAS4uLgA0adKE+fPnY2JiQuvWrRk0aBCbNm2qdKKeO3cu06ZN49FHHwXggw8+YMuWLcybN48vvviCxMRE/P396dmzJyqVCm/vsikLExMTcXV1pW/fvpiZmeHl5VWp36NoHBRFYfGeqyORdat6I7KK9GjRlPvauBBxPJXZa0/y/Ziut33M2nAhI59Xfz/CjjOXALjLvylzHgrG1b5+TuH67D1+/LQ7gUOJGfxz+hJ315PaflXUm8ZkOp2OKVOmEBYWRrt27W5YLiUlxfBBXcrFxYWUlIoHYZ89e7YhUdnb29OmTZsajbu+aN26NT169OCHH34A4MyZM/zzzz88/fTTAGi1Wt555x2CgoJwdHTExsaG9evXk5iYWKnjnzhxAk9PT0OSBggNDb2u3NKlSwkLC8PV1RUbGxveeOONSp/j2nMFBwcbkjRAWFgYOp2uXBuGtm3bYmJiYlh2c3MjLa1yjUaysrK4cOECYWFh5daHhYVx4sQJQH97PSoqioCAACZPnsyGDRsM5R5++GHy8/Px8/Nj3LhxrFixgpKSkipdp2i4opIyOJmSjcZUzdBqNiKryPQBrTFVq9h0Mo2dVxNhfaEoCisPnSd83nZ2nLmEhZmad4a05X9ju9XbJA3gbGthmBe8obYArzc16gkTJnD06FF27NhRo8edPn16uRr4+fPnq5es/3uh6vuYXNM4qvVg/TFU//puNCW66se9gaeffppJkybxxRdfsHDhQlq0aME999wDwJw5c/j000+ZN28eQUFBWFtbM2XKFIqKimrs/JGRkYwaNYpZs2YRHh6Ovb09S5Ys4aOPPqqxc1zLzKx84x2VSoVOV3Nz/Hbq1Im4uDj+/vtvNm7cyCOPPELfvn35/fff8fT0JCYmho0bNxIREcHzzz9vuKPx77hE41Nam76/vTv2ljX3fvs1s+Hx7t4s2hXP/605wepJPTGp5QFCKuNKbhFvrDxq6Osd7OnAJ48E49fMxsiRVc74e/z4ZU8CBxKusP30pXrzDL2y6kWNeuLEiaxevZotW7bg4eFx07Kurq6kppYfFi41NfWGt1I1Gg12dnaGl61tNZ/7mFtX/WVyzfcgE1P9umufT9/suNXwyCOPoFarWbx4Mf/73/8YO3as4Zbczp07GTJkCI8//jjBwcH4+flx6tSpSh87MDCQpKQkkpPLBmXYvXt3uTK7du3C29ub119/nS5duuDv709CQkL5yzU3R6u9eV/RwMBADh8+XK5R4c6dO1Gr1QQEBFQ65puxs7PD3d2dnTt3llu/c+fOcl/k7OzsGDFiBN9++y1Lly5l+fLlpKenA2BpacngwYP57LPP2Lp1K5GRkURH19wXL1E/ZRUU89fVRmSPhXjW+PFf6OOPnYUpJ5KzWH7g+p4cde1yTiEDP/uHNdHJmKpVTL2vFcvHhzaYJA3gbGfB4931teoZfx4lv6hh9Vc3aqJWFIWJEyeyYsUKNm/ejK/vrceADQ0NZdOmTeXWRUREVHgb9k5jY2PDiBEjmD59OsnJyYwZM8awzd/fn4iICHbt2sWJEyf4z3/+c90Xnpvp27cvrVq1YvTo0Rw+fJh//vmH119/vVwZf39/EhMTWbJkCWfPnuWzzz5jxYoV5cr4+PgQFxdHVFQUly5dKtd+oNSoUaOwsLBg9OjRHD16lC1btjBp0iSeeOKJ6x573I5XXnmFDz74gKVLlxITE8Nrr71GVFQUL7zwAgAff/wxv/76KydPnuTUqVMsW7YMV1dXHBwcWLRoEd9//z1Hjx4lNjaWn3/+GUtLy3LPsUXj9OchfSOyVi42dKqFPsJNrM2Z3Ec/Z/XcDTHkFhr3kcq6YykkZxbQ3MGSP57vweQ+/pjW4eAlNWVKX39c7SxIuJzHvI2Vr6TUB0b9bU+YMIGff/6ZxYsXY2trS0pKCikpKeTn5xvKPPnkk0yfPt2w/MILL7Bu3To++ugjTp48ycyZM9m/fz8TJ040xiXUO08//TRXrlwhPDy83PPkN954g06dOhEeHk6vXr1wdXVl6NChlT6uWq1mxYoV5Ofn061bN5555hnefffdcmUeeOABXnzxRSZOnEiHDh3YtWsXb775ZrkyDz74IP3796d37940a9aswi5iVlZWrF+/nvT0dLp27cpDDz1Enz59mD9/ftV+GbcwefJkpk6dyksvvURQUBDr1q1j1apV+PvrPyRtbW358MMP6dKlC127diU+Pp61a9eiVqtxcHDg22+/JSwsjPbt27Nx40b++usvnJycajRGUb8oisIvNdyIrCJPhHrj5WhFWnYhX2+PrZVzVNa+OP0dpAc7e9Dew8GosdwOWwsz/m+ovv3Tt//EcuRchnEDqgKVYsQn6zf6I1+4cKGhNtirVy98fHzKdctZtmwZb7zxBvHx8fj7+/Phhx8ycODASp3z3LlzeHp6kpSUdN1t9oKCAuLi4vD19cXCov42jhANi/xdNR5RSRkM/WInGlM1e/7bBwcr81o719/RyTz3y0EszNRsfbm30Rpshb2/mfMZ+fz8dAg9/ZsaJYaaNOnXQ/x1+AKtXW35a1LPOh3a9Fo3y0X/ZtTGZJX5jrB169br1j388MM8/PDDtRCREELc2K9Xa9ODgtxqNUkD9G/nSlefJuyLv8Lnm0/z7rCgWj1fRc5dyeN8Rj4mahUdvRzq/Py14a3Bbfjn9EVOpmTzzfZYJvRuaeyQbqnhPWgQQggjyC4oZtXhmhmJrDJUKhUT79U/htl8Ms0o3Yr2xetve7drbo+1pt50ErotTW00vDVY32D0002nOXsx5xZ7GJ8kaiGEqIQ/oy6QX6ylpbMNXbzrZqKJbj6OmJmoSM4sIOFyXp2c81p7465cjcM4E2vUlqEdmtMroBlFJTpeW34Ena5+962WRC2EELdQGyORVYaluQkdPfVJMjK2ghEMa9neOP05u/rceLTIhkilUvF/Q9thZW7Cvvgr/LIn4dY7GZEkaiGEuIXo85kcT87C3FTN8BociawyurfQ9ySIPFu3ifpSTiFnL+rHMmhsi
RrAo4kVr4brx2V4/++TXMjIv8UexiOJugI1ObqVEPL31PCVTmc5sJ0rTaxrtxHZv4X6XU3UsZfr9Dn1/qvPp1u52NT5NdeVJ0J96OzdhNwiLW+sPFpvhxdtHK0Daoi5uTlqtZoLFy7QrFkzzM3NG+RMK6J+UBSFoqIiLl68iFqtxty8cX7YNXY5hSX8GXW1EVm32m9E9m8dvRwwN1VzMVtfw23pXDcjghmeT/s2vtp0KRO1ig8eDGLgpzvYfDKNVYcvMKRD3d4xqQxJ1NdQq9X4+vqSnJzMhQvVGNtbiApYWVnh5eWFWi03sBqiVVEXyCvS4tfM2ihJy8LMhM5eTYiMvUxk7OU6S9SlLb4b423va7V0tmXivS35OOIUs/46zl3+zXCsZ3cQJFH/i7m5OV5eXpSUlNxyTGohbsXExARTU1O5M9OAld72fqwOG5H9W2gLJyJjL7P77GWe6F77w9RmFxRz7IJ+qt3GXKMuNf6eFqw5kkxMajZv/3WMeY92NHZI5UiiroBKpcLMzExmQRLiDhd9LpPo85mYm6gZ3unmo0fVptAWThABu68+p67tLwwHEzPQKeDpaImbveWtd2jgzE3VfPBQe4Z/uZOVURd4oIM797auuXkFbpfcixNCiBv4dZ++Nt2/natRb4cGezhgaWbC5dwiTqXW/gAdpeN7N/bb3tfq4OnA2DD9xFBvrDhKjpEnQ7mWJGohhKhATmEJfx46DxinEdm1zE3VdLk66Ejk2Uu1fr69VxN1tzsoUQNM7dcKT0dLLmQW8OG6k8YOx0BufQsh7lglWh0XMgqIu5xLwuVc4i/lEX85l/jLuSSl51GsVfBtak13P+MnrO5+Tvxz+hKRsZcZE3brKYGrq7BES9TVmaXuhOfT17IyN+X94e0Z9d0eftqdwOBg93pxV0EStRCiUSvW6jh/JV+fjC/lEn85T5+UL+eRlJ5HyU2Gj7QwUzOlr3+9aAwYenXgkz1x6eh0Cmp17cR05FwmRSU6mtqY49vUulbOUZ+FtWzKI108+G3/OaYtP8LayXdhYWZi1JgkUQshGh2dTmHOhhjWRidz7ko+2pskY3NTNd6OVvg0tcbHyQpvJ2t8nKzxaWqFm70lJrWUEKsqqLk91uYmZOQVcyIli7bu9rVynr3XPJ+uD19QjOH1gW3YEnOR2Iu5zN98hpevjmBmLJKohRCNzo+R8Xy19axh2cJMjbejPvn6OFlfTcb65OxqZ1FrtdOaZGaipquvI1tjLrI7Nr3WE/Wddtv7WvZWZrz9QFue++UgC7adZWCQG22cLeDIEuj0ZJ3HI4laCNGonEzJYvbf+oZAL/drxUOdPXG21TSIZHwroX5ObI25SOTZyzzds+afU2t1CgcS9COS1Ydns0ajLWaAfQKve5/g3YRApi0/wornQjGN/ALaPwqmddsDQFp9CyEajYJiLVOWRFFUoqN3QDMm9G6Jq33DqDFXRtlz6ss3vZ1fXSeSs8gpLMFWY0qgm12NH7/e0mmhILNsOfkI/BDO0xmf4WChJvp8Jj/siofuz0NhVp2HJ4laCNFozFkfw8mUbJyszfnwoeBG94y1jZsdthpTsgtKOH6h5hNG6W3vzj5N6s2z+VqhKHAxBvZ8A0tGwYd+sOGNsu1uwdDEF3WL3rx1n37s748jTpHg8xBYN63zcOXWtxCiUdh+6iLf74gD4MOH2tPMVmPkiGqeqYmabr6ObDqZRmTsJYI8avY59d7GOtCJokB6LCTshLjt+ldOavky5w+V/WxiCpMPgUrFUEVh2fE8dp29zGvLo1k8LqTOvwBKohZCNHjpuUW8vOwwAI9396JPYP0Z/rGmhbZw0ifqs5d59u4WNXZcRVEME3GENPSGZNoSSD0KiZFXX7uvT8ymFuDVHXzvBt97wK1D+e1Xk7FKpWL28CDC520nMvYyS/cl8WgdD4AjiVoI0aApisL0P46Qll1Ii2bWvD6wjbFDqlXdr85PvS/+CiVaHaYmNfMEM/ZSLpdzizA3Vdd4Tb3WFeeDrgQ0tvrlI0vhz+fLlzExh+adwecu8LsHPLqCaeXuung7WfPSfQG8u/YE7649Qe/WzrjYWdTwRdyYJGohRIP22/4k1h9LxcxExaePdsTS3LiDU9S2Nm522FuakZlfTPT5TDp6NamR45be9u7g6YDGtAH9Dte/Dnu+hn7vQPfn9Ou8uoPGHrxC9D979QD3jmBW/eT6VJgPfx25gKejFaZ1/PxeErUQosGKu5TLzFXHAXipXwDtmjewmmA1qNUqQnwd2XA8lcjYyzWWqEsn4qiXt71z0iB2m/4Zc+JueOIPsHPXb7NyAl0xpB4rK+/oB9PiQF1zXzhMTdQsHtcdG03dp01J1EKIBqlYq2PKkkPkF2vp7ufIuLv8jB1SnQlt4aRP1Gcv83yvljVyzL3x9aghWWEOJOyC2K36V9qx8tsTdkHQQ/qfOz4O7YaDwzXzdKtUoKr5uwLGSNJg5ES9fft25syZw4EDB0hOTmbFihUMHTr0huW3bt1K7969r1ufnJyMq6trLUYqhKhvPtt0msPnMrGzMOXjRzo07u5E/1Lan3p//BWKSnSYm97ec+oLGfmcu5KPWgWdvGumhl4l2mI4f+BqYt4G5/bqnzlfy7W9vuGXVyj4hJWtt3Gu01CNwaiJOjc3l+DgYMaOHcvw4cMrvV9MTAx2dmWd8Z2dG/8bJYQosy8+nS+2nAHgveFBuDtYGjmiutXK2RZHa3PSc4s4ci6DLrdZCy5t7d2uuX3d1xr/nADHVkLRv+bZbuIDfr30L5+7wdqpbuOqR4yaqAcMGMCAAQOqvJ+zszMODg41H5AQot7LKihmypIodAoM79Sc+9u7GzukOqdWq+ju58ja6BQiz16+7US9py76TxcXwLE/4Nw+GPSxofsTJYX6JG3pqG+N7ddL313Ksfam8mxoGuTIZB06dMDNzY377ruPnTt33rRsYWEhWVlZhld2dnYdRSmEqA1v/XmM8xn5eDpaMuuBtsYOx2hCr3bTioy9fNvH2lcbiVpRIP9K2bJKBWtegv0/QNqJsvU9X4T/bIdXzsLDi6DzGEnS/9KgGpO5ubmxYMECunTpQmFhId999x29evViz549dOrUqcJ9Zs+ezaxZs+o4UiFEbfgz6jwrDp1HrYJ5Izpga2Fm7JCMpvQ59YGEKxSWaKvdpSo9t4jTafrbzl19bvP5tKJAchQcXwUn/tL3XX5+l36bqQY6P6XvIqWxKdvH5c79slVZDSpRBwQEEBBQNi9ojx49OHv2LJ988gk//fRThftMnz6dqVOnGpbPnz9PmzaNe0AEIRqjc1fyeGPlUQAm3utPZ+960DrZiFo0s6GZrYaL2YUcSswwDIRSVaXPp1s62+BkU41hV3U6feOv0uScmVi2zcQcslPA9mpj3/7vVSvGO12DStQV6datGzt27Ljhdo1Gg0ZT9seXlVX3M58IIW6PVqcw9bfDZBeU0MHTgcn31kyXpIZMpVLR3c+Jvw5fIPLs5eon6urMP60thvgd+sR8cnX54TnNrKBlX2gzBPz7gcUd
NAtXLWnwiToqKgo3NzdjhyGEqEVfbz/L3rh0rMxNmDeiQ40Nm9nQhZYm6tjLvFjNY5TWqLvd6vm0TgunI/TJOWZN+efPGnsI6A+Bg6FFHzC3qmY0oiJGTdQ5OTmcOXPGsBwXF0dUVBSOjo54eXkxffp0zp8/z//+9z8A5s2bh6+vL23btqWgoIDvvvuOzZs3s2HDBmNdghCilh05l8HHG04BMPOBtvg0tTZyRPVH6XPqqMQMCoq1WJhV7Tl1bmEJR69Ol9m1ohp1cT6YlXZ9U8HqKZCdrF+0coLWgyBwiL5/s6l5Na9C3IpRE/X+/fvLDWBS+ix59OjRLFq0iOTkZBITy553FBUV8dJLL3H+/HmsrKxo3749GzdurHAQFCFEw5dXVMKUJVGU6BQGtHPl4c4exg6pXvFxssLVzoKUrAIOJlyhR8uqzZV8MPEKWp1CcwdLml/bFz3rAvw2Gq7EwUsx+qE41Wp9Y7C8y/qas1eofjpIUeuM+lvu1asXiqLccPuiRYvKLb/66qu8+uqrtRyVEKK++L81J4i9lIuLnYb3hgXV+TzA9Z3+ObUjK6P0t7+rmqj3xaVjRQFjm8ZBdHrZsJzWznDpFBRkQEo0uHfQr+81rUbjF5VTrUSdlJSESqXCw0P/7Xbv3r0sXryYNm3a8Oyzz9ZogEKIO9OGYyks3qO/o/bRwx1oYi23VisS2sJJn6jPVqE/9eWzcHoDffb/xgTNETTnSiDbC9o9qO/vbGKq79Pc1B/s5S6GsVUrUT/22GM8++yzPPHEE6SkpHDffffRtm1bfvnlF1JSUpgxY0ZNxymEuIOkZRfw2h/RAIy7y5ee/lWrKd5JQv30v5vD5zLIKyrByryCj/WSIkjcBac2wOn1cFnfNigYQAXFdl6YBfSHkoKyZ9It5JFifVGtRH306FG6desGwG+//Ua7du3YuXMnGzZsYPz48ZKohRC3Zfbak6TnFhHoZsfL4QG33uEO5umof758PiOf/fFXuLtVM/2Golx9K+3jf+r/LbpmVEa1KVnOXfksyY9Dmm78PuVJ/TNoUS9VK1EXFxcb+iZv3LiRBx54AIDWrVuTnJxcc9EJIe44Z9KyWRl1HoAPHgyq9ohbd4rS/tTLD54jMvayPlFnXYDPO0NxXllB62b6fs3+/aBFb37efZHv4mMI93VBJUm6XqtWom7bti0LFixg0KBBRERE8M477wBw4cIFnJzu3BlOhBC375ONp1EU6NfGhfYeDsYOp/4rzOYRTSQOJtFEnn1Mv87OHew9QVuoH3gkcAi4dyxXa94XFwNAN1/5zK7vqpWoP/jgA4YNG8acOXMYPXo0wcHBAKxatcpwS1wIIarqZEoWa47o78q9eF8rI0dTjylK2exTF08Rcmga7Uw1dD1/HzmFJfqpKsesAeumZeWuodUp7I/XD1hyy4FOhNFVK1H36tWLS5cukZWVRZMmZYO4P/vss1hZyYg0Qojq+SRCP7DJoCA3At1k6Mly8tIhZq1+7mYHL7j/Y/365p3Arxe/JzbDtLCIfXHp9G7tDDbNbniokylZZF9N6IFutnUTv6i2aiXq/Px8FEUxJOmEhARWrFhBYGAg4eHhNRqgEOLOcPR8JuuPpaJSwQt9/Y0dTv2QexlO/qVvEBa3HXQl+vWWTWDAh/puVCoVPPknx38/Qtb+JCJjL+sT9U2Uju/dybuJDMfaAFQrUQ8ZMoThw4czfvx4MjIyCAkJwczMjEuXLvHxxx/z3HPP1XScQohGrrQ2/UCwO61c7uBaXl46nFwDx1ZA7FZQtGXbXNpBm6HQ5oHrRgULbeHE0v1JlepPvdcwvvdtTmsp6kS1EvXBgwf55JNPAPj9999xcXHh0KFDLF++nBkzZkiiFkJUyaHEK2w6mYZaBS/0uQNr0/kZV29rr4Czm8tqzgCuQdB2mL5BWNMbzxpWOu73sQuZZOYXY29Z8VzdiqKwN+7q82lpSNYgVCtR5+XlYWur/8a7YcMGhg8fjlqtpnv37iQkJNRogEKIxu+TjacBGNbRA79mNkaOpo7lpcNHAaAtKlvn0g7aDoW2w8GpRaUO42JngV9Ta2Iv5bI3Lp372rhUWC7+ch6XcgoxN1HT3sO+Bi5A1LZqPZxo2bIlK1euJCkpifXr19OvXz8A0tLSsLOTBiBCiMrbH5/O9lMXMVGrGn9tuigXjiyDbXPK1lk5glswOLeB3q/DhH3w3E64+5VKJ+lS3a/Wqm92+3tvnH5bsKd9lWfbEsZRrRr1jBkzeOyxx3jxxRe59957CQ0NBfS1644dO9ZogEKIxu2jq1NYPtzZAy+nRthr5NquVJnn4I9nQG0G3Z7RNwoDePwPsLj9Sk6onxOL9yQSGXuzRF1621u6ZTUU1UrUDz30ED179iQ5OdnQhxqgT58+DBs2rMaCE0I0brvOXiIy9jJmJiom3nvj568NTu5l/ZjaJ9eAxg6GfaVf3ywAWt8PzoGg05WVr4EkDdDdT1+jPpGcxZXcogonMtkbr0/iXaX/dINR7WkuXV1dcXV15dy5cwB4eHjIYCdCiEpTFMXQ0ntEV088mjTw2vSlM/oGYTFrIWkPKFcTsaklDJoL5tb65Ud/qbUQmtlq8He24XRaDnviLtO/nVu57SmZBSSl56NWQWdvafHdUFTrGbVOp+Ptt9/G3t4eb29vvL29cXBw4J133kF37bdEIYS4gR1nLrEv/grmpmom9m6Az6Z1WkjcDREzYH5XmN8ZIt6ExEh9knZtD/e8Bk+vB7O6+xJSWqveHZt+3bbSbllt3O2wtai4Vbiof6pVo3799df5/vvvef/99wkLCwNgx44dzJw5k4KCAt59990aDVII0bgoimJ4Nj0qxAtXewsjR1RJJYVwZiOcXAun1kHepbJtajPwvQsCBkKr/uDgaZQQQ1s48dPuhAoblJU2JJPb3g1LtRL1jz/+yHfffWeYNQugffv2NG/enOeff14StRDiprbGXCQqKQMLMzXP9apay+Y6V1IEplef9RbnwdInygYhsbAH/3AIGAAt++iXjay0Rh2Tms3lnEKcbDSGbfviZHzvhqhaiTo9PZ3WrVtft75169akp19/u0UIIUopisLHV59NPxnqg7NtPa1NX4iCNS+BSg3PROjXWTaB4Ef1DcRaDwSvUDCpX7eQHa3Nae1qy8mUbHbHpjOovf45dUZeETGp+jmpu0qL7walWs+og4ODmT9//nXr58+fT/v27W87KCFE47XheCrR5zOxMjfhP3f7GTucMnnpcOl02bKNM5w/AOf367eVGvolDHgffO+ud0m6VGmtOjK27Nb8vquzZfk1s6bpNbVsUf9Vq0b94YcfMmjQIDZu3GjoQx0ZGUlSUhJr166t0QCFEI2HTlfW0ntMD59yt2WNojBb/7z56O/6oTt974YnVui32bnDIz+CRzf9oCQNSGgLJxbtii/3nHrf1YZkIVKbbnCqVaO+5557OHXqFMOGDSMjI4OMjAyGDx/OsWPH+Omnn2o6RiFEI/H30RROpmRjozFl3F1Gqk0XF8CJv+C30TDHH1Y8C6c36Mf
Xzr8C2uKysm2GgJ3bjY9VT3X3dUKlgrMXc0nLKgBg79UZs6QhWcNT7X7U7u7u1zUaO3z4MN9//z3ffPPNbQcmhGhctDqFeRv1temxPX0rHIyj9k5eAnHb4OhyfZIuzCrb5tgCgh6Cdg9Bs1Z1F1Mtsrcyo42bHccuZBEZe5n72rhw9HwmIIm6ITLqRKTbt29n8ODBuLu7o1KpWLly5S332bp1K506dUKj0dCyZUsWLVpU63EKIW7f6iMXOJ2Wg52FKU/39K39E+p0+n7Oa16Gj1vDz8Mh6hd9krZ1h9CJ8OxWmHQAev+30STpUqGG/tSXOZSYQYlOwd3eAo8mlkaOTFRVtWvUNSE3N5fg4GDGjh3L8OHDb1k+Li6OQYMGMX78eH755Rc2bdrEM888g5ubG+Hh4XUQsRCiOkq0OuZdnSFr3F1+N5yCsUYV58H/hkJJvn7Z0lE/I1W7h/SttdVGrafUutAWTny3I47Is5dpdrVlfVdfR1Sl446LBsOoiXrAgAEMGDCg0uUXLFiAr68vH330EQCBgYHs2LGDTz75RBK1ELehoFjLpZxCLuUUcSm7EJ2icHerZjU2u9LKqAvEXcrFwcqMp2qjNl2Up7+tfW4fPPCZfp3GBto/oh+kJOgh8OtVb1tp14auvo6oVfppLVcfuaBfJ7e9G6QqJepb1XozMjJuJ5ZbioyMpG/fvuXWhYeHM2XKlFo9rxANUV5RCZeyi7iYU3g1CRdyKbuo7OdrEnN2Ycl1+7vaWTChdwse6eqJxrT6CbtYq+OzTfra9H/uboGNphbqB8X5sPpF0BVDyH/Apa1+fWnSvgPZWZgR1Nyew+cyib2YC0iL74aqSv9j7O1vPuqOvb09Tz755G0FdDMpKSm4uJSfDN3FxYWsrCzy8/OxtLz+2UthYSGFhYWG5ezs7FqLT4jbpdMpnM/IJ6ewhLyiEnILteQVafU/F2nJKyz/b37pekNZ/b9X8orIK9JW6dzmJmqa2pjT1FZDWlYhKVkFvPnnMRZsi2XivS15qLMHZiZVv128/MA5EtPzaGpjzuge3lXe/zqXz0LUYkiPhYcX6tdZO+kTtJUT2LjcfP87SPcWThw+p29E1sTKjJbONkaOSFRHlRL1woULayuOWjN79mxmzZpl7DCEuCVFUZiw+CB/H02psWNqTNU0tdHQ1FZDMxtzmtlq9MuGlz4xN7XRYGdhanh+WVii5bd9SczfcobzGflM/yOaL7eeYdK9/gzv2BzTSibswhItn28+A8D4e1pgZV7N2nRBJhxboU/QSXvK1t/7BjhdHYI0XIYu/rdQPye+3hYLQBcfeT7dUBn1GXVVubq6kpqaWm5damoqdnZ2FdamAaZPn87UqVMNy+fPn6dNmza1GqcQ1bHuaAp/H01BpQInaw3WGhOszE2xMjfBytwEa3NTrDRX/zXXb7uujMYUS3MTmliZ09TGHBuNabU+nDWmJjwR6sPDXTxZvCeRL7eeJSk9n1d/P8KXW84wuY8/Qzo0x0R982P/tv8c5zPycbbV8Hj3KtamdVqI3QJRv8LJ1VCi7w+MSg0t+0LwSLBrXuVru5N09XHEVK2iRKfIbe8GrEEl6tDQ0OtGPouIiDCMjlYRjUaDRlM2+lFWVtYNywphLNkFxcz86xgAk3q3ZGq/ACNHpGdhZsLYnr6M7ObFz7sTWLDtLPGX85j622HmbznDC338ub+9e4UJu6BYyxdXa9MTeresXMM0nQ7O7YXjf8KxlZB9oWxbs0Do8Ji+gZitaw1dYeNmrTGlb6ALW0+l0SdQHgk0VEZN1Dk5OZw5c8awHBcXR1RUFI6Ojnh5eTF9+nTOnz/P//73PwDGjx/P/PnzefXVVxk7diybN2/mt99+Y82aNca6BCFqxEcbTpGaVYiPkxXP925p7HCuY2luwri7/XgsxIv/RSbw9fazxF7M5YUlUXyx5QxT+raif1tX1Nck7MV7EknJKsDN3oIRXSsx5eOW9+DAj5Bzza1/yyYQ9LA+Qbt1ALl1W2WfjuxAfpEWB6s6HGBG1CijJur9+/fTu3dvw3LpLerRo0ezaNEikpOTSUxMNGz39fVlzZo1vPjii3z66ad4eHjw3XffSdcs0aAdTsrgx8h4AP5vaFCNdYmqDdYaU57r1YLHu3vx4654vtkey6nUHJ7/5SCtXW158b5W9GvjQkGxji+3ngVg4r0V1Ka1xRC/A3zvKevPnJOmT9Iae/20kW0e0N/iNpUJJG6HxtTktlrtC+NTKYqiGDuIunTu3Dk8PT1JSkrCw8PD2OGIO1yJVseQL3Zy7EIWQzu4M+/RjsYOqUoy84v5YUccP+yIM3TxatfcjgAXO5YfPIdHE0s2v9QLc9NrGp/pdPBZB8hIgLEbwCtEvz71GGSeB797JDmLRq8quahxD80jRD33Y2QCxy5kYWdhyuuDGl4jR3tLM168rxX/TOvNxN4tsTY34ej5LJYfPAfAi/d4Yn56LaybDqV1ArUaPEPAuln5Z9AubaFVP0nSQvxLg2pMJkRjciEjn483xAAwfWAgzWwbboJysDLn5fAAxvb05cfNR0jc9xfDLQ7Qc9NBKNYPtkGHx8A1SP/zgA/Awh7UcktWiFuRRC2EkcxcdYzcIi1dvJswokslGlvVZ5fPwql1OJ5ax4sJu0BdAkVXt9l76qeL1NiWlW9g8zsLYUySqIUwgojjqWw4noqpWsW7w4LKtZZuELTFkBgJp9bDqXVw+Uz57U1bXW0QNgTcO0lrbSFugyRqIepYbmEJb/15FIBxd/sR4Gp7iz3qoe/7wYWDZctqM/AJg1b9wb9f2WhhQojbJolaiDr2ScQpLmQW4OloyeR7/Y0dzs2VFELkFxC7FUYtK2vo5d0DMhKhVbj+5dcbLOyMGqoQjZUkaiHq0LELmSzcFQ/A20PaYWlezxpTFeZA+llwC9Yvm5jD3m8gO1nf77llH/36XtPhvnca/ZzOQtQHkqiFqCNancJ/VxxFq1MY1N6N3gHOxg5JP572hSiI3Qxnt+onvNDYwitn9UlYpYKeU/U/l7bYBv1cz0KIOiGJWog68sueBA4nZWCrMeWt+43YZ/pKgn6yi7ObIXYbFGSU366x1deg7a9OeBHybJ2HKIQoI4laiDqQmlXAnHX6PtOv9A/A2c6i7k5ekAlx/5Ql5/TY8ts19uB7F7S4F1r0Bke/uotNCHFLkqiFqANvrz5OdmEJwZ4OjAqp4nSPtyM/A+a0BF1x2TqVCXh20zcAa3EvuHcEE/koEKK+kv+dQtSyLTFprDmSjIlaxXvD2t1yDudqu3gKtn8Ixfnw6C/6dZYO4BwIxXllidmnp7TQFqIBkUQtRC3KL9Ly5kp9n+mnevjQ1t2+5g6eeV6fgJte7eKlUkH0Mn2f5qJcMLfWr39qbflRwYQQDYokaiFq0aebTnPuSj7u9ha8eF+r2zuYouhnmIpZCyfXQHIUBD4AI37Sb2/qD33eAu8wMLUs20+StBANmiRqIWpJTEo23/2jb7g1a0g7rDXV+O+mLYHEXXByrT5BZyRcs1EFRTn6BF46ROddU2
8/cCFEvSKJWohaoNMp/HdFNCU6hX5tXLivjUvldy7MhjOb9In51Pry3adMLcCvFwQM1I+lbVMP+mILIWqVJGoh0CfWBdvPcjgpg64+jnT3c6KNm121J8tYsi+JAwlXsDY3YeYDbSu3U3osrH8DzkSAtqhsvaWjPikHDNR3nyp99iyEuCNIohZ3vPwiLS8vO8ya6GQA1h9LBcDByowQX0dC/Zzo0bIp/s42qCoxC9TF7ELe//sEAFP7BeDuYFlxQZ0O8i6V1Yo19nB6PehK9H2ZAwZC60HgGSLzNgtxB5NELe5oaVkFjPvffg6fy8TMRMXoUB/OXsxhb1w6GXnFrD+WakjcTW3MCfFzokcLJ0L9nPBtal1h4n53zXGyCkpo627H6NAb9JlOiIQ/xoFdc3h6vX6dtRMM/gzcO4BzG5kaUggBSKIWd7DjF7J45sd9XMgswMHKjK8f70yInxMAxVod0ecziTx7md2xl9kXn86lnCLWHElmzRF9zdvFTkOPFk0J9XMitIUTno5W/HP6IiujLqBWwezhQZiaXJ20IjsV8q+Ac2v9chMfyDwHBVmQlw5Wjvr1HUfV8W9BCFHfqRRFUYwdRF06d+4cnp6eJCUl4eHhYexwhJFsPJ7K5CWHyCvS4tfMmh9Gd8Wn6Y2f/RaWaDmcpE/cu85e4lBiBkVaXbkyzR0sKSzRcSmnkDE9fJgZ7q3vRnVkqX74Tt974MmVZTvEbQePrmB2g1vjQohGqyq5SGrU4o6iKArf74jj3bUnUBQIa+nEl491xt7K7Kb7aUxN6ObrSDdfR17o609BsZaDCVfYdfYykbGXOZyUwfmMfEzQ8oBNDK8XroC5a6E4t+wgxfmgLQaTq+fyvbsWr1QI0VhIohb1QnZBMcVaBUdr81o7R7FWx4w/j/Hr3kQAHgvxYtYDbTEzqfqcyhZmJvRo2ZQeLZtC5jkKT50h41gEtud3YFWcDseuFmziC8GPQtDD4NSiBq9GCHGnkEQtjO5wUgZP/7iPK3nFDApy45m7fGnv4VCj58jMK+a5Xw6w6+xlVCp4Y1Abxob5VKoVd4Xyr8DmdyF2K1w+jQYw9JS2dIR2D0L7EeDRRRqFCSFuS9WrErXgiy++wMfHBwsLC0JCQti7d+8Nyy5atAiVSlXuZWFRh1MGihq1JSaNR7/ZzaWcIrQ6hVWHL/DA/J088nUkEcdT0eluvwlF/KVchn25k11nL2NtbsJ3T3bh6Z6+lU/SJYX6aSJj/i5bZ24Dh3+Fy6dBpYbmXeCul2H0angpBgbNBc+ukqSFELfN6DXqpUuXMnXqVBYsWEBISAjz5s0jPDycmJgYnJ0rHnXJzs6OmJgYw3K1a0XCqH7bn8T0P6LR6hTu8m/KC338+WVPIn8dvsDeuHT2xqXj19SasT19ebCTB5bmVe9LvDv2MuN/PkBGXjHu9hZ8P6YrgW63mDlKpwNtYVkjr9MRsHQUOLXUDzwC+ufM980CG1f9bFSWDlWOTQghKsPorb5DQkLo2rUr8+fPB0Cn0+Hp6cmkSZN47bXXriu/aNEipkyZQkZGRrXOJ62+jU9RFD7ffIaPI04BMLxTcz54sL3hWXFyZj6LdsWzeE8i2QUlADSxMuOJ7t48EepDM1tNpc7z2/4kXl8RTbFWIdjTgW+f7Iyz7Q3uvmSn6kcEO7sZYrdBl7Fw7+v6bfkZ8GWoPiEPmQ+mlTu/EELcSINp9V1UVMSBAweYPn26YZ1araZv375ERkbecL+cnBy8vb3R6XR06tSJ9957j7ZtKx6msbCwkMLCQsNydnZ2zV2AqLISrY43r2nQNaF3C17uF1DuroibvSXTBwQy6V5/ftuXxA874zh3JZ/PNp9hwfZYhnVozjN3+eLvUvGsUDqdwofrY1iw7SwAg4Lc+OiRYCzMrqmR67Rw/iCc3qB/JUeVP0jiNX9/lg4w9bjcxhZCGIVRE/WlS5fQarW4uJSfsMDFxYWTJ09WuE9AQAA//PAD7du3JzMzk7lz59KjRw+OHTtW4beS2bNnM2vWrFqJX1RNfpGWSb8eZOOJNFQqePuBtjwR6nPD8jYaU8b29OXJUG82HE/l239iOZSYwdL9SSzdn0SvgGY809OPsJZOhkSfV1TCi0ujDKOJTbq3JS/2baUfszsvXV9jPr0BzmyEvMvlT+jeEVreBy3uheady2+TJC2EMBKj3vq+cOECzZs3Z9euXYSGhhrWv/rqq2zbto09e/bc8hjFxcUEBgYycuRI3nnnneu2/7tGff78edq0aSO3vutYem4RT/+4j0OJGWhM1Xz6aEf6t3Ot8nEOJKTz7fY41h9PofQvN9DNjmd6+tLN15HnfjnA0fNZmJuo+eChIIZ19ID4HbDpHTi3F5RrBinR2OsnuWgVDi37ykxUQog602BufTdt2hQTExNSU1PLrU9NTcXVtXIf4mZmZnTs2JEzZ85UuF2j0aDRlD1TzMrKqn7AolqS0vN48oe9xF3Kxd7SjO9Hd6GLj2O1jtXZ25HOTziScDmXH3bE8dv+c5xIzuKlZYcBsKKAYVYneWbQ3bTtePWPX20GSbv1Pzu3Af/7wD8cPLuVDT4ihBD1lFG7Z5mbm9O5c2c2bdpkWKfT6di0aVO5GvbNaLVaoqOjcXNzq60wxW04ej6TYV/uIu5SLs0dLFn+XGi1k/S1vJ2smTWkHZHT7+WV8ACcrzYwe9fuDz7RfUjbC8vLCnt0gfvnwZRoeD4S7nsbfMIkSQshGgSjd8+aOnUqo0ePpkuXLnTr1o158+aRm5vLU089BcCTTz5J8+bNmT17NgBvv/023bt3p2XLlmRkZDBnzhwSEhJ45plnjHkZogLbT13kuZ8PkFukJdDNjkVPdcXFrob6vBdkwsk1OBxdzoRe03nmrt4cSLhCpyIT2HBUPytVKbUJdHmqZs4rhBB1zOiJesSIEVy8eJEZM2aQkpJChw4dWLdunaGBWWJiImp1WcX/ypUrjBs3jpSUFJo0aULnzp3ZtWsXbdq0MdYliAosP3COacuPUKJTCGvpxFePd8bO4jZrsEW5+kFHjv6h70qlLdKvd/JH49GFHi2aghIOrcOl8ZcQotEwej/quib9qGuXoih8ufUsc9brB6QZ0sGdOQ8FY25azacsxQX6pHx0OcSsg5L8sm1NA/RDdQY9JONoCyEalAbTmEw0LlqdwsxVx/hpdwIA/7nbj2n9W+u7RlVFSZF+DO2jy/XTRBZd0/e9ia8+Obcbrm8YJjVnIUQjJ4lacDmnkLhLuVhrTLE2N8VKY4KNxhSNqbrSw7MWFGt5Yckh1h9LRaWCGfe34akw36oHc3INrHweCjLK1tl5QLth0Ha4vq+zJGchxB1EEvUdbsfpSzzzv30UFOuu26ZWgbW5KdYaffLW/2xyNZmbYqMxwcrcFGtzE3acucTBxAzMTdXMG9GBgUGVbIV/JQF0JWW3rh1b6JO0tTO0HaqvPXt0A3W9mD9GCCHqnCTqO9jWmDSe/ekARSU6mtrouzflFZWQV6QFQKdAdmEJ2YUllTqenYUp3z7ZhRA/p8oFsOcb+PtVa
DsMHl6oX+fcGsauB4+u+tbaQghxh5NEfYfacjKN//x0gCKtjvvauPDFY50MDb50OoW8Yi15hSXkFOoTd25hCblFJeQWaskrKiGnUL899+o2tQqeCPWmpXPF428DkB6n/9fx6i1xr+6AAkU5+hmrSmvNXt1r78KFEKKBkUR9B9p4PJXnfzlIkVZH/7aufP5YR8PMVQBqtQobjSk2GlNue1DNkiI4uRoO/qhvIBb8GAz7Sr/NrT1MOQoOnrd7FiGEaLQkUd9hNhxLYcLigxRrFQYFuTHv0Q7lknSNuXRGn5yjFkPepasrVVCYBYpS1iBMkrQQQtyUJOo7yLqjyUxcfIgSncLgYHc+eSQY05pM0sUFcOIvfYKO/6dsvY0rdHoCOj4BTbxr7nxCCHEHkER9h1hzJJnJSw6h1SkM7eDO3IdrMEmnnYCDP8HhxZB/Rb9OpdZPGdl5DPj3AxP5UxNCiOqQT887wKrDF3hxaRRancLwTs2Z81AwJlUdhKQiJYWwcCCc31+2zq45dHoSOj4O9jLymxBC3C5J1I3cykPnmfpbFDoFHu7swfsPtq9+ktbpIO04uLbTL5tqwNwK1KbQqr8+QbfsK92qhBCiBkmibsSWHzjHy78fRlHg0a6evDcsqOrDeZYqyISv74aMJJh6Amz1k6YwYA5YOYFNs5oLXAghhIEM99RI/bY/yZCkHwvxqnqSLi6AxN1lyxb2+tHCzK0hNbpsvXNrSdJCCFGLpEbdCP26N5Hpf+iT6RPdvXl7SNvKjdmtKJB8GA79DNG/QXE+vBQDVo767cMWgK2b/na3EEKIOiGJupH5eXcCb6w8CsCYHj68NbjNrZN07iWI/l2foK+tLdt5QHpsWaKWqSSFEKLOSaJuRP4XGc+MP48B8HRPX94YFFhxki4pgnN74exmOLsFLhwCrk5LbmIOre/Xt9r26yUNw4QQwsgkUTcSC3fGMeuv4wA8e7cf0we0vj5JH/oFTqyCuH+gOLf8Nrdg/YAk7R4sq0ELIYQwOknUjcB3/8Tyf2tOAPBcrxa8Gh6AKv8KxO+AwMFlw3We3Qyn1ul/tmoKLe6FFr3BrzfYVXJaSiGEEHVKEnUDlFNYwsGEK+yPT2dPnP4FChN7+/NSv1aodCXwSTt9rfn53eAcqN+xw2PgGqRP0C7tZI5nIYRoACRRNwBpWQXsi7/Cvvh09iekc/xCFiZKCUGqWDqrT/Gs2Qna2Bfj2m+H/na3iZl+qsjsZMi7XHagln30LyGEEA2GJOp6RlEUYi/lsi8unX3xV9ifkE7C5TzsyKWT+jT91TG8aRZDB3UsGorKdsxBn5jt3PXLI3/VjxwmhBCiQZNEbWTFWh3HLmSxPz6dvXHp7E+4QnpuEe5coos6hmfUMXQ1j6GV+hzq0pbZpaycwCsUPEP0LbRtXMu2SZIWQohGQRJ1HVIUhXNX8olKyiAqKYPDSRkcvZBJUXEJtuSRiQ0ALU3T2Gg65foDOLbQJ2avEP2/Ti3LGooJIYRolOpFov7iiy+YM2cOKSkpBAcH8/nnn9OtW7cbll+2bBlvvvkm8fHx+Pv788EHHzBw4MA6jLhyMvKKOHwuk6jEDA6f0yfm7NxcVCgUYg5AuHovcy2+5qR1Vw52m0cXH0faudvCZx/qW2J7dtc/b/bqDjbORr4iIYQQdc3oiXrp0qVMnTqVBQsWEBISwrx58wgPDycmJgZn5+sT065duxg5ciSzZ8/m/vvvZ/HixQwdOpSDBw/Srl07I1yBXmGJluMXsjh8TW05/3ISrdWJBKqSGKJOZJoqkRaaC3xu+yJX/IfTwdOBELUZtivn0dXiAl3vuWbkrylH9I3ChBBC3NFUiqIoty5We0JCQujatSvz588HQKfT4enpyaRJk3jttdeuKz9ixAhyc3NZvXq1YV337t3p0KEDCxYsuOX5zp07h6enJ0lJSXh43N58yauPXGBfXDrHE1PQpp6gpZJAoCqR1ip9gm6iyql4x55Toe9b+p+LcuHymavdpWQUMCGEuBNUJRcZtUZdVFTEgQMHmD59umGdWq2mb9++REZGVrhPZGQkU6dOLbcuPDyclStX1maoFVq0M57xF/7LW+oo1KbXf99RVCaomrYCl7ZXX+30/5a2zAb9bFRuwXUYtRBCiIbEqIn60qVLaLVaXFxcyq13cXHh5MmTFe6TkpJSYfmUlJQKyxcWFlJYWGhYzs7Ovs2oyzzQwR2X4mao0xW0lk6o3YJQlSZjl7aomgaAmUWNnU8IIcSdx+jPqGvb7NmzmTVrVq0c+8lQH2g9F8wsMZGGXkIIIWqBUceQbNq0KSYmJqSmppZbn5qaiqura4X7uLq6Vqn89OnTyczMNLyOHz9eM8GXauItrbGFEELUGqMmanNzczp37symTZsM63Q6HZs2bSI0NLTCfUJDQ8uVB4iIiLhheY1Gg52dneFla2tbcxcghBBC1DKj3/qeOnUqo0ePpkuXLnTr1o158+aRm5vLU089BcCTTz5J8+bNmT17NgAvvPAC99xzDx999BGDBg1iyZIl7N+/n2+++caYlyGEEELUCqMn6hEjRnDx4kVmzJhBSkoKHTp0YN26dYYGY4mJiaivmeWpR48eLF68mDfeeIP//ve/+Pv7s3LlSqP2oRZCCCFqi9H7Ude1muxHLYQQQlRHVXKRTEgshBBC1GNGv/Vd13Q6HQDJyclGjkQIIcSdqjQHleakm7njEnVp166bTfohhBBC1IXU1FS8vLxuWuaOe0ZdUlLCoUOHcHFxKddIrTqys7Np06YNx48fl25fQgjRyNXkZ75OpyM1NZWOHTtianrzOvMdl6hrUlZWFvb29mRmZmJnZ2fscIQQQtQiY33mS2MyIYQQoh6TRC2EEELUY5Kob4NGo+Gtt95Co9EYOxQhhBC1zFif+fKMWgghhKjHpEYthBBC1GOSqIUQQoh6TBK1EEIIUY9Jor4NX3zxBT4+PlhYWBASEsLevXuNHZIQQogatn37dgYPHoy7uzsqlYqVK1fW6fklUVfT0qVLmTp1Km+99RYHDx4kODiY8PBw0tLSjB2aEEKIGpSbm0twcDBffPGFUc4vrb6rKSQkhK5duzJ//nxAPxycp6cnkyZN4rXXXjNydEIIIWqDSqVixYoVDB06tM7OKTXqaigqKuLAgQP07dvXsE6tVtO3b18iIyONGJkQQojGRhJ1NVy6dAmtVouLi0u59S4uLqSkpBgpKiGEEI2RJGohhBCiHpNEXQ1NmzbFxMTEMLd1qdTUVFxdXY0UlRBCiMZIEnU1mJub07lzZzZt2mRYp9Pp2LRpE6GhoUaMTAghRGNz89mqxQ1NnTqV0aNH06VLF7p168a8efPIzc3lqaeeMnZoQgghalBOTg5nzpwxLMfFxREVFYWjoyNeXl61fn7pnnUb5s+fz5w5c0hJSaFDhw589tlnhISEGDssIYQQNWjr1q307t37uvWjR49m0aJFtX5+SdRCCCFEPSbPqIUQQoh6TBK1EEIIUY9JohZCCCHqMUnUQgghRD0miVoIIYSoxyRRCyGEEPWYJGohhBCi
HpNELYQQQtRjkqiFELVGpVKxcuVKY4chRIMmiVqIRmrMmDGoVKrrXv379zd2aEKIKpBJOYRoxPr378/ChQvLrdNoNEaKRghRHVKjFqIR02g0uLq6lns1adIE0N+W/uqrrxgwYACWlpb4+fnx+++/l9s/Ojqae++9F0tLS5ycnHj22WfJyckpV+aHH36gbdu2aDQa3NzcmDhxYrntly5dYtiwYVhZWeHv78+qVasM265cucKoUaNo1qwZlpaW+Pv7X/fFQog7nSRqIe5gb775Jg8++CCHDx9m1KhRPProo5w4cQKA3NxcwsPDadKkCfv27WPZsmVs3LixXCL+6quvmDBhAs8++yzR0dGsWrWKli1bljvHrFmzeOSRRzhy5AgDBw5k1KhRpKenG85//Phx/v77b06cOMFXX31F06ZN6+4XIERDoAghGqXRo0crJiYmirW1dbnXu+++qyiKogDK+PHjy+0TEhKiPPfcc4qiKMo333yjNGnSRMnJyTFsX7NmjaJWq5WUlBRFURTF3d1def31128YA6C88cYbhuWcnBwFUP7++29FURRl8ODBylNPPVUzFyxEIyXPqIVoxHr37s1XX31Vbp2jo6Ph59DQ0HLbQkNDiYqKAuDEiRMEBwdjbW1t2B4WFoZOpyMmJgaVSsWFCxfo06fPTWNo37694Wdra2vs7OxIS0sD4LnnnuPBBx/k4MGD9OvXj6FDh9KjR49qXasQjZUkaiEaMWtr6+tuRdcUS0vLSpUzMzMrt6xSqdDpdAAMGDCAhIQE1q5dS0REBH369GHChAnMnTu3xuMVoqGSZ9RC3MF279593XJgYCAAgYGBHD58mNzcXMP2nTt3olarCQgIwNbWFh8fHzZt2nRbMTRr1ozRo0fz888/M2/ePL755pvbOp4QjY3UqIVoxAoLC0lJSSm3ztTU1NBga9myZXTp0oWePXvyyy+/sHfvXr7//nsARo0axVtvvcXo0aOZOXMmFy9eZNKkSTzxxBO4uLgAMHPmTMaPH4+zszMDBgwgOzubnTt3MmnSpErFN2PGDDp37kzbtm0pLCxk9erVhi8KQgg9SdRCNGLr1q3Dzc2t3LqAgABOnjwJ6FtkL1myhOeffx43Nzd+/fVX2rRpA4CVlRXr16/nhRdeoGvXrlhZWfHggw/y8ccfG441evRoCgoK+OSTT3j55Zdp2rQpDz30UKXjMzc3Z/r06cTHx2Npacldd93FkiVLauDKhWg8VIqiKMYOQghR91QqFStWrGDo0KHGDkUIcRPyjFoIIYSoxyRRCyGEEPWYPKMW4g4lT72EaBikRi2EEELUY5KohRBCiHpMErUQQghRj0miFkIIIeoxSdRCCCFEPSaJWgghhKjHJFELIYQQ9ZgkaiGEEKIek0QthBBC1GP/D098GJYkfaFAAAAAAElFTkSuQmCC\n", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "train_reward_margins = [i-j for i,j in zip(tracking[\"train_chosen_rewards\"], tracking[\"train_rejected_rewards\"])]\n", + "val_reward_margins = [i-j for i,j in zip(tracking[\"val_chosen_rewards\"], tracking[\"val_rejected_rewards\"])]\n", + "\n", + "plot_losses(\n", + " epochs_seen=epochs_tensor,\n", + " tokens_seen=tracking[\"tokens_seen\"],\n", + " train_losses=train_reward_margins,\n", + " val_losses=val_reward_margins,\n", + " label=\"loss\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "69756011-acd6-404c-a5fc-7fe252cf20c8", + "metadata": { + "id": "69756011-acd6-404c-a5fc-7fe252cf20c8" + }, + "source": [ + "- As we can see, and as it's desired, the reward margins improve; this mirrors the loss curve and is a good sign\n", + "- Note that DPO losses and reward margins are valuable metrics to track during training; however, they don't tell the whole store\n", + "- Lastly, and most importantly, we have to conduct a qualitative check of the responses\n", + "- Here, we will look at the response (in addition, you could use an LLM to score the responses similar to chapter 7)" + ] + }, + { + "cell_type": "code", + "execution_count": 53, + "id": "5EfUXJGOali8", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "5EfUXJGOali8", + "outputId": "7ec7db47-d775-4646-f660-0d7f7e7c8503" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Convert the active sentence to passive: 'The chef cooks the meal every day.'\n", + "\n", + "Correct response:\n", + ">> The meal is cooked by the chef every day.\n", + "\n", + "Reference model response:\n", + ">> The meal is cooked every day by the chef.\n", + "\n", + "Policy model response:\n", + ">> The meal is prepared by the chef.\n", + "\n", + "-------------------------------------\n", + "\n", + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Classify an input string as either a noun or a verb.\n", + "\n", + "### Input:\n", + "Dance\n", + "\n", + "Correct response:\n", + ">> 'Dance' can be classified as a verb.\n", + "\n", + "Reference model response:\n", + ">> \"Dance\" can be classified as a verb.\n", + "\n", + "Policy model response:\n", + ">> The input string \"Dance\" could be classified as a verb.\n", + "\n", + "-------------------------------------\n", + "\n", + "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Rewrite the sentence using a metaphor.\n", + "\n", + "### Input:\n", + "The book is very interesting.\n", + "\n", + "Correct response:\n", + ">> The book is a page-turner.\n", + "\n", + "Reference model response:\n", + ">> The book is a treat.\n", + "\n", + "Policy model response:\n", + ">> The book is a treat.\n", + "\n", + "-------------------------------------\n", + "\n" + ] + } + ], + "source": [ + "torch.manual_seed(123)\n", + "\n", + "\n", + "for entry in val_data[:3]:\n", + "\n", + " input_text = format_input(entry)\n", + "\n", + " token_ids = generate(\n", + " model=reference_model,\n", + " idx=text_to_token_ids(input_text, tokenizer).to(device),\n", + " max_new_tokens=256,\n", + " context_size=BASE_CONFIG[\"context_length\"],\n", + " eos_id=50256\n", + " )\n", + " generated_text = token_ids_to_text(token_ids, tokenizer)\n", + " reference_response_text = (\n", + " generated_text[len(input_text):]\n", + " .replace(\"### Response:\", \"\")\n", + " .strip()\n", + " )\n", + "\n", + " token_ids = generate(\n", + " model=policy_model,\n", + " idx=text_to_token_ids(input_text, tokenizer).to(device),\n", + " max_new_tokens=256,\n", + " context_size=BASE_CONFIG[\"context_length\"],\n", + " eos_id=50256\n", + " )\n", + " generated_text = token_ids_to_text(token_ids, tokenizer)\n", + " policy_response_text = (\n", + " generated_text[len(input_text):]\n", + " .replace(\"### Response:\", \"\")\n", + " .strip()\n", + " )\n", + "\n", + " print(input_text)\n", + " print(f\"\\nCorrect response:\\n>> {entry['output']}\")\n", + " print(f\"\\nReference model response:\\n>> {reference_response_text.strip()}\")\n", + " print(f\"\\nPolicy model response:\\n>> {policy_response_text.strip()}\")\n", + " print(\"\\n-------------------------------------\\n\")" + ] + }, + { + "cell_type": "markdown", + "id": "RmcKVg0JlHVF", + "metadata": { + "id": "RmcKVg0JlHVF" + }, + "source": [ + "- As we can see from the reference model and policy model responses above, the optimized model (i.e., the policy model) indeed slightly changed its style compared to the original model (i.e., the reference model)\n", + "- For instance, `\"Dance\" can be classified as a verb.` changed to `The input string \"Dance\" could be classified as a verb.`, which is a slightly more polite response (the use of \"could\" instead of \"can\" makes the statement sound less assertive and more tentative)" + ] + }, + { + "cell_type": "code", + "execution_count": 54, + "id": "jJSwb2hzQwdP", + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "jJSwb2hzQwdP", + "outputId": "6e755db4-9524-42a8-a58b-2218bf03e39a" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Rewrite the sentence using a simile.\n", + "\n", + "### Input:\n", + "The car is very fast.\n", + "\n", + "Correct response:\n", + ">> The car is as fast as lightning.\n", + "\n", + "Reference model response:\n", + ">> The car is as fast as a cheetah.\n", + "\n", + "Policy model response:\n", + ">> The car is as fast as a cheetah.\n", + "\n", + "-------------------------------------\n", + "\n", + "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "What type of cloud is typically associated with thunderstorms?\n", + "\n", + "Correct response:\n", + ">> The type of cloud typically associated with thunderstorms is cumulonimbus.\n", + "\n", + "Reference model response:\n", + ">> A thunderstorm is a type of storm that typically produces thunder or lightning.\n", + "\n", + "Policy model response:\n", + ">> The type of cloud typically associated with thunderstorms is a cumulus.\n", + "\n", + "-------------------------------------\n", + "\n", + "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", + "\n", + "### Instruction:\n", + "Name the author of 'Pride and Prejudice'.\n", + "\n", + "Correct response:\n", + ">> Jane Austen.\n", + "\n", + "Reference model response:\n", + ">> The author of 'Pride and Prejudice' is Jane Austen.\n", + "\n", + "Policy model response:\n", + ">> The author of 'Pride and Prejudice' is Jane Austen.\n", + "\n", + "-------------------------------------\n", + "\n" + ] + } + ], + "source": [ + "torch.manual_seed(123)\n", + "\n", + "\n", + "for entry in test_data[:3]:\n", + "\n", + " input_text = format_input(entry)\n", + "\n", + " token_ids = generate(\n", + " model=reference_model,\n", + " idx=text_to_token_ids(input_text, tokenizer).to(device),\n", + " max_new_tokens=256,\n", + " context_size=BASE_CONFIG[\"context_length\"],\n", + " eos_id=50256\n", + " )\n", + " generated_text = token_ids_to_text(token_ids, tokenizer)\n", + " reference_response_text = (\n", + " generated_text[len(input_text):]\n", + " .replace(\"### Response:\", \"\")\n", + " .strip()\n", + " )\n", + "\n", + " token_ids = generate(\n", + " model=policy_model,\n", + " idx=text_to_token_ids(input_text, tokenizer).to(device),\n", + " max_new_tokens=256,\n", + " context_size=BASE_CONFIG[\"context_length\"],\n", + " eos_id=50256\n", + " )\n", + " generated_text = token_ids_to_text(token_ids, tokenizer)\n", + " policy_response_text = (\n", + " generated_text[len(input_text):]\n", + " .replace(\"### Response:\", \"\")\n", + " .strip()\n", + " )\n", + "\n", + " print(input_text)\n", + " print(f\"\\nCorrect response:\\n>> {entry['output']}\")\n", + " print(f\"\\nReference model response:\\n>> {reference_response_text.strip()}\")\n", + " print(f\"\\nPolicy model response:\\n>> {policy_response_text.strip()}\")\n", + " print(\"\\n-------------------------------------\\n\")" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "gpuType": "A100", + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.6" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/ch07/04_preference-tuning-with-dpo/previous_chapters.py b/ch07/04_preference-tuning-with-dpo/previous_chapters.py new file mode 100644 index 00000000..bd693393 --- /dev/null +++ b/ch07/04_preference-tuning-with-dpo/previous_chapters.py @@ -0,0 +1,470 @@ +# Copyright (c) Sebastian Raschka under Apache License 2.0 (see LICENSE.txt). 
+# Source for "Build a Large Language Model From Scratch" +# - https://www.manning.com/books/build-a-large-language-model-from-scratch +# Code: https://github.com/rasbt/LLMs-from-scratch +# +# This file collects all the relevant code that we covered thus far +# throughout Chapters 2-6. +# This file can be run as a standalone script. + + +import matplotlib.pyplot as plt +from matplotlib.ticker import MaxNLocator +import numpy as np +import tiktoken +import torch +import torch.nn as nn +from torch.utils.data import Dataset, DataLoader + + +##################################### +# Chapter 2 +##################################### + + +class GPTDatasetV1(Dataset): + def __init__(self, txt, tokenizer, max_length, stride): + self.tokenizer = tokenizer + self.input_ids = [] + self.target_ids = [] + + # Tokenize the entire text + token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"}) + + # Use a sliding window to chunk the book into overlapping sequences of max_length + for i in range(0, len(token_ids) - max_length, stride): + input_chunk = token_ids[i:i + max_length] + target_chunk = token_ids[i + 1: i + max_length + 1] + self.input_ids.append(torch.tensor(input_chunk)) + self.target_ids.append(torch.tensor(target_chunk)) + + def __len__(self): + return len(self.input_ids) + + def __getitem__(self, idx): + return self.input_ids[idx], self.target_ids[idx] + + +def create_dataloader_v1(txt, batch_size=4, max_length=256, + stride=128, shuffle=True, drop_last=True, num_workers=0): + # Initialize the tokenizer + tokenizer = tiktoken.get_encoding("gpt2") + + # Create dataset + dataset = GPTDatasetV1(txt, tokenizer, max_length, stride) + + # Create dataloader + dataloader = DataLoader( + dataset, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last, num_workers=num_workers) + + return dataloader + + +##################################### +# Chapter 3 +##################################### +class MultiHeadAttention(nn.Module): + def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False): + super().__init__() + assert d_out % num_heads == 0, "d_out must be divisible by n_heads" + + self.d_out = d_out + self.num_heads = num_heads + self.head_dim = d_out // num_heads # Reduce the projection dim to match desired output dim + + self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias) + self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias) + self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias) + self.out_proj = nn.Linear(d_out, d_out) # Linear layer to combine head outputs + self.dropout = nn.Dropout(dropout) + self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1)) + + def forward(self, x): + b, num_tokens, d_in = x.shape + + keys = self.W_key(x) # Shape: (b, num_tokens, d_out) + queries = self.W_query(x) + values = self.W_value(x) + + # We implicitly split the matrix by adding a `num_heads` dimension + # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim) + keys = keys.view(b, num_tokens, self.num_heads, self.head_dim) + values = values.view(b, num_tokens, self.num_heads, self.head_dim) + queries = queries.view(b, num_tokens, self.num_heads, self.head_dim) + + # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim) + keys = keys.transpose(1, 2) + queries = queries.transpose(1, 2) + values = values.transpose(1, 2) + + # Compute scaled dot-product attention (aka self-attention) with a causal mask + attn_scores = queries @ keys.transpose(2, 3) # Dot product for each head + 
+ # Original mask truncated to the number of tokens and converted to boolean + mask_bool = self.mask.bool()[:num_tokens, :num_tokens] + + # Use the mask to fill attention scores + attn_scores.masked_fill_(mask_bool, -torch.inf) + + attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1) + attn_weights = self.dropout(attn_weights) + + # Shape: (b, num_tokens, num_heads, head_dim) + context_vec = (attn_weights @ values).transpose(1, 2) + + # Combine heads, where self.d_out = self.num_heads * self.head_dim + context_vec = context_vec.reshape(b, num_tokens, self.d_out) + context_vec = self.out_proj(context_vec) # optional projection + + return context_vec + + +##################################### +# Chapter 4 +##################################### +class LayerNorm(nn.Module): + def __init__(self, emb_dim): + super().__init__() + self.eps = 1e-5 + self.scale = nn.Parameter(torch.ones(emb_dim)) + self.shift = nn.Parameter(torch.zeros(emb_dim)) + + def forward(self, x): + mean = x.mean(dim=-1, keepdim=True) + var = x.var(dim=-1, keepdim=True, unbiased=False) + norm_x = (x - mean) / torch.sqrt(var + self.eps) + return self.scale * norm_x + self.shift + + +class GELU(nn.Module): + def __init__(self): + super().__init__() + + def forward(self, x): + return 0.5 * x * (1 + torch.tanh( + torch.sqrt(torch.tensor(2.0 / torch.pi)) * + (x + 0.044715 * torch.pow(x, 3)) + )) + + +class FeedForward(nn.Module): + def __init__(self, cfg): + super().__init__() + self.layers = nn.Sequential( + nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]), + GELU(), + nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]), + ) + + def forward(self, x): + return self.layers(x) + + +class TransformerBlock(nn.Module): + def __init__(self, cfg): + super().__init__() + self.att = MultiHeadAttention( + d_in=cfg["emb_dim"], + d_out=cfg["emb_dim"], + context_length=cfg["context_length"], + num_heads=cfg["n_heads"], + dropout=cfg["drop_rate"], + qkv_bias=cfg["qkv_bias"]) + self.ff = FeedForward(cfg) + self.norm1 = LayerNorm(cfg["emb_dim"]) + self.norm2 = LayerNorm(cfg["emb_dim"]) + self.drop_resid = nn.Dropout(cfg["drop_rate"]) + + def forward(self, x): + # Shortcut connection for attention block + shortcut = x + x = self.norm1(x) + x = self.att(x) # Shape [batch_size, num_tokens, emb_size] + x = self.drop_resid(x) + x = x + shortcut # Add the original input back + + # Shortcut connection for feed-forward block + shortcut = x + x = self.norm2(x) + x = self.ff(x) + x = self.drop_resid(x) + x = x + shortcut # Add the original input back + + return x + + +class GPTModel(nn.Module): + def __init__(self, cfg): + super().__init__() + self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"]) + self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"]) + self.drop_emb = nn.Dropout(cfg["drop_rate"]) + + self.trf_blocks = nn.Sequential( + *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])]) + + self.final_norm = LayerNorm(cfg["emb_dim"]) + self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False) + + def forward(self, in_idx): + batch_size, seq_len = in_idx.shape + tok_embeds = self.tok_emb(in_idx) + pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device)) + x = tok_embeds + pos_embeds # Shape [batch_size, num_tokens, emb_size] + x = self.drop_emb(x) + x = self.trf_blocks(x) + x = self.final_norm(x) + logits = self.out_head(x) + return logits + + +def generate_text_simple(model, idx, max_new_tokens, context_size): + # idx is (B, T) array of indices in the current context + for _ in 
range(max_new_tokens): + + # Crop current context if it exceeds the supported context size + # E.g., if LLM supports only 5 tokens, and the context size is 10 + # then only the last 5 tokens are used as context + idx_cond = idx[:, -context_size:] + + # Get the predictions + with torch.no_grad(): + logits = model(idx_cond) + + # Focus only on the last time step + # (batch, n_token, vocab_size) becomes (batch, vocab_size) + logits = logits[:, -1, :] + + # Get the idx of the vocab entry with the highest logits value + idx_next = torch.argmax(logits, dim=-1, keepdim=True) # (batch, 1) + + # Append sampled index to the running sequence + idx = torch.cat((idx, idx_next), dim=1) # (batch, n_tokens+1) + + return idx + + +##################################### +# Chapter 5 +##################################### +def generate(model, idx, max_new_tokens, context_size, temperature=0.0, top_k=None, eos_id=None): + + # For-loop is the same as before: Get logits, and only focus on last time step + for _ in range(max_new_tokens): + idx_cond = idx[:, -context_size:] + with torch.no_grad(): + logits = model(idx_cond) + logits = logits[:, -1, :] + + # New: Filter logits with top_k sampling + if top_k is not None: + # Keep only top_k values + top_logits, _ = torch.topk(logits, top_k) + min_val = top_logits[:, -1] + logits = torch.where(logits < min_val, torch.tensor(float('-inf')).to(logits.device), logits) + + # New: Apply temperature scaling + if temperature > 0.0: + logits = logits / temperature + + # Apply softmax to get probabilities + probs = torch.softmax(logits, dim=-1) # (batch_size, vocab_size) + + # Sample from the distribution + idx_next = torch.multinomial(probs, num_samples=1) # (batch_size, 1) + + # Otherwise same as before: get idx of the vocab entry with the highest logits value + else: + idx_next = torch.argmax(logits, dim=-1, keepdim=True) # (batch_size, 1) + + if idx_next == eos_id: # Stop generating early if end-of-sequence token is encountered and eos_id is specified + break + + # Same as before: append sampled index to the running sequence + idx = torch.cat((idx, idx_next), dim=1) # (batch_size, num_tokens+1) + + return idx + + +def train_model_simple(model, train_loader, val_loader, optimizer, device, num_epochs, + eval_freq, eval_iter, start_context, tokenizer): + # Initialize lists to track losses and tokens seen + train_losses, val_losses, track_tokens_seen = [], [], [] + tokens_seen, global_step = 0, -1 + + # Main training loop + for epoch in range(num_epochs): + model.train() # Set model to training mode + + for input_batch, target_batch in train_loader: + optimizer.zero_grad() # Reset loss gradients from previous batch iteration + loss = calc_loss_batch(input_batch, target_batch, model, device) + loss.backward() # Calculate loss gradients + optimizer.step() # Update model weights using loss gradients + tokens_seen += input_batch.numel() + global_step += 1 + + # Optional evaluation step + if global_step % eval_freq == 0: + train_loss, val_loss = evaluate_model( + model, train_loader, val_loader, device, eval_iter) + train_losses.append(train_loss) + val_losses.append(val_loss) + track_tokens_seen.append(tokens_seen) + print(f"Ep {epoch+1} (Step {global_step:06d}): " + f"Train loss {train_loss:.3f}, Val loss {val_loss:.3f}") + + # Print a sample text after each epoch + generate_and_print_sample( + model, tokenizer, device, start_context + ) + + return train_losses, val_losses, track_tokens_seen + + +def evaluate_model(model, train_loader, val_loader, device, eval_iter): + 
model.eval() + with torch.no_grad(): + train_loss = calc_loss_loader(train_loader, model, device, num_batches=eval_iter) + val_loss = calc_loss_loader(val_loader, model, device, num_batches=eval_iter) + model.train() + return train_loss, val_loss + + +def generate_and_print_sample(model, tokenizer, device, start_context): + model.eval() + context_size = model.pos_emb.weight.shape[0] + encoded = text_to_token_ids(start_context, tokenizer).to(device) + with torch.no_grad(): + token_ids = generate_text_simple( + model=model, idx=encoded, + max_new_tokens=50, context_size=context_size + ) + decoded_text = token_ids_to_text(token_ids, tokenizer) + print(decoded_text.replace("\n", " ")) # Compact print format + model.train() + + +def assign(left, right): + if left.shape != right.shape: + raise ValueError(f"Shape mismatch. Left: {left.shape}, Right: {right.shape}") + return torch.nn.Parameter(torch.tensor(right)) + + +def load_weights_into_gpt(gpt, params): + gpt.pos_emb.weight = assign(gpt.pos_emb.weight, params['wpe']) + gpt.tok_emb.weight = assign(gpt.tok_emb.weight, params['wte']) + + for b in range(len(params["blocks"])): + q_w, k_w, v_w = np.split( + (params["blocks"][b]["attn"]["c_attn"])["w"], 3, axis=-1) + gpt.trf_blocks[b].att.W_query.weight = assign( + gpt.trf_blocks[b].att.W_query.weight, q_w.T) + gpt.trf_blocks[b].att.W_key.weight = assign( + gpt.trf_blocks[b].att.W_key.weight, k_w.T) + gpt.trf_blocks[b].att.W_value.weight = assign( + gpt.trf_blocks[b].att.W_value.weight, v_w.T) + + q_b, k_b, v_b = np.split( + (params["blocks"][b]["attn"]["c_attn"])["b"], 3, axis=-1) + gpt.trf_blocks[b].att.W_query.bias = assign( + gpt.trf_blocks[b].att.W_query.bias, q_b) + gpt.trf_blocks[b].att.W_key.bias = assign( + gpt.trf_blocks[b].att.W_key.bias, k_b) + gpt.trf_blocks[b].att.W_value.bias = assign( + gpt.trf_blocks[b].att.W_value.bias, v_b) + + gpt.trf_blocks[b].att.out_proj.weight = assign( + gpt.trf_blocks[b].att.out_proj.weight, + params["blocks"][b]["attn"]["c_proj"]["w"].T) + gpt.trf_blocks[b].att.out_proj.bias = assign( + gpt.trf_blocks[b].att.out_proj.bias, + params["blocks"][b]["attn"]["c_proj"]["b"]) + + gpt.trf_blocks[b].ff.layers[0].weight = assign( + gpt.trf_blocks[b].ff.layers[0].weight, + params["blocks"][b]["mlp"]["c_fc"]["w"].T) + gpt.trf_blocks[b].ff.layers[0].bias = assign( + gpt.trf_blocks[b].ff.layers[0].bias, + params["blocks"][b]["mlp"]["c_fc"]["b"]) + gpt.trf_blocks[b].ff.layers[2].weight = assign( + gpt.trf_blocks[b].ff.layers[2].weight, + params["blocks"][b]["mlp"]["c_proj"]["w"].T) + gpt.trf_blocks[b].ff.layers[2].bias = assign( + gpt.trf_blocks[b].ff.layers[2].bias, + params["blocks"][b]["mlp"]["c_proj"]["b"]) + + gpt.trf_blocks[b].norm1.scale = assign( + gpt.trf_blocks[b].norm1.scale, + params["blocks"][b]["ln_1"]["g"]) + gpt.trf_blocks[b].norm1.shift = assign( + gpt.trf_blocks[b].norm1.shift, + params["blocks"][b]["ln_1"]["b"]) + gpt.trf_blocks[b].norm2.scale = assign( + gpt.trf_blocks[b].norm2.scale, + params["blocks"][b]["ln_2"]["g"]) + gpt.trf_blocks[b].norm2.shift = assign( + gpt.trf_blocks[b].norm2.shift, + params["blocks"][b]["ln_2"]["b"]) + + gpt.final_norm.scale = assign(gpt.final_norm.scale, params["g"]) + gpt.final_norm.shift = assign(gpt.final_norm.shift, params["b"]) + gpt.out_head.weight = assign(gpt.out_head.weight, params["wte"]) + + +def text_to_token_ids(text, tokenizer): + encoded = tokenizer.encode(text, allowed_special={"<|endoftext|>"}) + encoded_tensor = torch.tensor(encoded).unsqueeze(0) # add batch dimension + return encoded_tensor + + +def 
token_ids_to_text(token_ids, tokenizer): + flat = token_ids.squeeze(0) # remove batch dimension + return tokenizer.decode(flat.tolist()) + + +def calc_loss_batch(input_batch, target_batch, model, device): + input_batch, target_batch = input_batch.to(device), target_batch.to(device) + logits = model(input_batch) + loss = torch.nn.functional.cross_entropy(logits.flatten(0, 1), target_batch.flatten()) + return loss + + +def calc_loss_loader(data_loader, model, device, num_batches=None): + total_loss = 0. + if len(data_loader) == 0: + return float("nan") + elif num_batches is None: + num_batches = len(data_loader) + else: + # Reduce the number of batches to match the total number of batches in the data loader + # if num_batches exceeds the number of batches in the data loader + num_batches = min(num_batches, len(data_loader)) + for i, (input_batch, target_batch) in enumerate(data_loader): + if i < num_batches: + loss = calc_loss_batch(input_batch, target_batch, model, device) + total_loss += loss.item() + else: + break + return total_loss / num_batches + + +def plot_losses(epochs_seen, tokens_seen, train_losses, val_losses, label="loss"): + fig, ax1 = plt.subplots(figsize=(5, 3)) + + # Plot training and validation loss against epochs + ax1.plot(epochs_seen, train_losses, label=f"Training {label}") + ax1.plot(epochs_seen, val_losses, linestyle="-.", label=f"Validation {label}") + ax1.set_xlabel("Epochs") + ax1.set_ylabel(label.capitalize()) + ax1.legend() + ax1.xaxis.set_major_locator(MaxNLocator(integer=True)) # only show integer labels on x-axis + + # Create a second x-axis for tokens seen + ax2 = ax1.twiny() # Create a second x-axis that shares the same y-axis + ax2.plot(tokens_seen, train_losses, alpha=0) # Invisible plot for aligning ticks + ax2.set_xlabel("Tokens seen") + + fig.tight_layout() # Adjust layout to make room + plt.savefig(f"{label}-plot.pdf") + plt.show() diff --git a/ch07/README.md b/ch07/README.md index a006469e..ca001aa0 100644 --- a/ch07/README.md +++ b/ch07/README.md @@ -10,6 +10,6 @@ - [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API. -- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with DPO (in progress) +- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with Direct Preference Optimization (DPO) - [05_dataset-generation](05_dataset-generation) contains code to generate synthetic datasets for instruction finetuning
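
Side note on the reward margins tracked in the DPO notebook above: they fall directly out of the DPO objective itself. The following is a minimal, self-contained sketch of that relationship, not necessarily identical to the notebook's own loss implementation; the function name `dpo_loss_sketch` is invented for illustration, the per-sequence log-probabilities of the chosen/rejected responses under the policy and reference models are assumed to be precomputed tensors, and `beta` is the usual DPO temperature hyperparameter from the DPO paper.

```python
import torch.nn.functional as F


def dpo_loss_sketch(policy_chosen_logprobs, policy_rejected_logprobs,
                    reference_chosen_logprobs, reference_rejected_logprobs,
                    beta=0.1):
    # How strongly the policy prefers the chosen over the rejected response ...
    policy_logratios = policy_chosen_logprobs - policy_rejected_logprobs
    # ... and the same preference under the frozen reference model
    reference_logratios = reference_chosen_logprobs - reference_rejected_logprobs

    # DPO loss: -log sigmoid(beta * (policy log-ratio - reference log-ratio))
    loss = -F.logsigmoid(beta * (policy_logratios - reference_logratios)).mean()

    # Implicit per-example "rewards"; their difference is the reward margin
    chosen_rewards = beta * (policy_chosen_logprobs - reference_chosen_logprobs).detach()
    rejected_rewards = beta * (policy_rejected_logprobs - reference_rejected_logprobs).detach()
    reward_margin = (chosen_rewards - rejected_rewards).mean()

    return loss, reward_margin
```

A growing reward margin, as in the plot above, therefore means the policy assigns increasingly more probability mass to the chosen responses relative to the rejected ones than the reference model does, which is why it is worth tracking alongside the loss.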