Commit a08cbe4: fixes links etc after reorg

djliden committed Sep 12, 2024
1 parent c2dba16
Showing 9 changed files with 1,341 additions and 12 deletions.
8 changes: 4 additions & 4 deletions notebooks/_config.yml
@@ -1,6 +1,6 @@
-title: LLM Fine-Tuning
-author: Dan Liden
-#logo: logo.png
+title: Notebooks
+author: Daniel Liden
+logo: logo.jpg
 execute:
   execute_notebooks: 'off'

@@ -15,7 +15,7 @@ repository:
 html:
   use_issues_button: true
   use_repository_button: true
-  home_page_in_navbar: false
+  home_page_in_navbar: true
 sphinx:
   config:
     html_show_copyright: false
10 changes: 5 additions & 5 deletions notebooks/_toc.yml
@@ -6,13 +6,13 @@ root: index
 parts:
   - caption: AI Training
     chapters:
-      - file: fine-tuning/intro
+      - file: ai_training/intro
         sections:
-          - file: ai_training/fine_tuning/1_t5_small_single_gpu/1_t5_small_single_gpu
-          - file: ai_training/fine_tuning/2_gpt2_single_gpu/2_gpt2_single_gpu
-          - file: ai_training/fine_tuning/3_tinyllama_instruction_tune/3_tinyllama_instruction_tune
+          - file: ai_training/fine_tuning/1_t5_small_single_gpu/1_T5-Small_on_Single_GPU
+          - file: ai_training/fine_tuning/2_gpt2_single_gpu/2_GPT2_on_a_single_GPU
+          - file: ai_training/fine_tuning/3_tinyllama_instruction_tune/3_instruction_tuning_tinyllama_on_a_single_GPU.ipynb
           - file: ai_training/fine_tuning/4_olmo_1b_instruction_tune/4_olmo_instruction_tune
           - file: ai_training/fine_tuning/5_gemma_2b_axolotl/gemma_2b_axolotl
-      - file: fine_tuning/appendix
+      - file: ai_training/appendix
         sections:
           - file: ai_training/fine_tuning/3_tinyllama_instruction_tune/data_preprocessing
6 changes: 6 additions & 0 deletions notebooks/ai_training/appendix.md
@@ -0,0 +1,6 @@
+# Appendix
+
+This appendix collects miscellaneous resources and supplementary materials that complement the main sections of the AI training content.
+
+```{tableofcontents}
+```
@@ -19,7 +19,7 @@
"\n",
"Fine-tuning large language models (LLMs) almost always requires multiple GPUs to be practical (or possible at all). But if you're relatively new to deep learning, or you've only trained models on single GPUs before, making the jump to distributed training on multiple GPUs and multiple nodes can be extremely challenging and more than a little frustrating.\n",
"\n",
"As noted in the [readme](../README.md), the goal of this project is to start small and gradually add complexity. So we're not going to start with a \"large language model\" at all. We're starting with a very small model called [t5-small](https://huggingface.co/t5-small). Why start with a small model if we want to train considerably larger models?\n",
"The goal of this project is to start small and gradually add complexity. So we're not going to start with a \"large language model\" at all. We're starting with a very small model called [t5-small](https://huggingface.co/t5-small). Why start with a small model if we want to train considerably larger models?\n",
"- Learning about model fine-tuning is a lot less frustrating if you start from a place of less complexity and are able to get results quickly!\n",
"- When we get to the point of training larger models on distributed systems, we're going to spend a lot of time and energy on *how* to distribute the model, data, etc., across that system. Starting smaller lets us spend some time at the beginning focusing on the training metrics that directly relate to model performance rather than the complexity involved with distributed training. Eventually we will need both, but there's no reason to try to digest all of it all at once!\n",
"- Starting small and then scaling up will give us a solid intuition of how, when, and why to use the various tools and techniques for training larger models or for using more compute resources to train models faster.\n",
@@ -32,7 +32,7 @@
"t5-small is a 60 million parameter model. This is *small*: the smallest version of GPT2 has more than twice as many parameters (124M); llama2-7b, one of the most commonly-used models at the time of writing, has more than 116 times as many parameters (7B, hence the name). What does this mean for us? Parameter count strongly impacts the amount of memory required to train a model. Eleuther's [Transformer Math blog post](https://blog.eleuther.ai/transformer-math/#training) has a great overview of the memory costs associated with training models of different sizes. We'll get into this in more detail in a later notebook.\n",
"\n",
"## A few things to keep in mind\n",
"Check out the [Readme](README.md) if you haven't already, as it provides important context for this whole project. If you're looking for a set of absolute best practices for how to train particular models, this isn't the place to find them (though I will link them when I come across them, and will try to make improvements where I can, as long as they don't come at the cost of extra complexity!). The goal is to develop a high-level understanding and intuition on model training and fine-tuning, so you can fairly quickly get to something that *works* and then iterate to make it work *better*.\n",
"If you're looking for a set of absolute best practices for how to train particular models, this isn't the place to find them (though I will link them when I come across them, and will try to make improvements where I can, as long as they don't come at the cost of extra complexity!). The goal is to develop a high-level understanding and intuition on model training and fine-tuning, so you can fairly quickly get to something that *works* and then iterate to make it work *better*.\n",
"\n",
"## Compute used in this example\n",
"I am using a `g4dn.4xlarge` AWS ec2 instance, which has a single T4 GPU with 16GB VRAM.\n",
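The hunk above leans on the link between parameter count and training memory. As a minimal back-of-the-envelope sketch (not part of this commit; it assumes mixed-precision Adam at roughly 18 bytes per parameter, in the spirit of the Transformer Math heuristics, and ignores activations and batch size):

```python
# Ballpark training-memory estimate from parameter count alone.
# Assumption: mixed-precision Adam, i.e. fp16 weights + fp32 master weights +
# Adam moment/variance + gradients, roughly 18 bytes per parameter.
# Activations, batch size, and framework overhead are ignored.

def estimate_training_memory_gb(n_params: float, bytes_per_param: float = 18.0) -> float:
    """Rough GB needed to hold weights, gradients, and optimizer states."""
    return n_params * bytes_per_param / 1e9

for name, n_params in [("t5-small", 60e6), ("gpt2", 124e6), ("llama2-7b", 7e9)]:
    print(f"{name:>10}: ~{estimate_training_memory_gb(n_params):.1f} GB before activations")
```

Even this crude estimate shows why a 60M-parameter model fits comfortably on a single 16GB T4 while a 7B-parameter model does not.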
12 changes: 12 additions & 0 deletions notebooks/ai_training/intro.md
@@ -0,0 +1,12 @@
+# LLM Training/Fine-tuning
+
+Welcome to the AI Training module! This section introduces the fundamental concepts and techniques used in training artificial intelligence models.
+
+In this module, we'll explore various aspects of AI training, including:
+- Data preparation and preprocessing
+- Model selection and architecture
+- Training algorithms and optimization techniques
+- Evaluation metrics and performance assessment
+
+```{tableofcontents}
+```
5 changes: 4 additions & 1 deletion notebooks/index.md
@@ -1,4 +1,7 @@
-# Jupyter Notebook Examples
+# Guides and Examples
 
+```{attention} This site was previously dedicated solely to fine-tuning LLMs. I have since expanded the scope to include other topics. The conversion is still in progress, so you might encounter broken links or other issues. Let me know if you do! You can submit an issue with the GitHub button on the top right.
+```
+
 This repository contains a collection of Jupyter notebooks demonstrating various concepts and techniques across different fields. Currently, it includes a series on fine-tuning language models, but it will expand to cover other topics in the future.
 
Binary file added notebooks/logo.jpg
9 changes: 9 additions & 0 deletions pyproject.toml
@@ -0,0 +1,9 @@
+[project]
+name = "fine-tuning"
+version = "0.1.0"
+description = "Add your description here"
+readme = "README.md"
+requires-python = ">=3.12"
+dependencies = [
+    "jupyter-book>=1.0.2",
+]