diff --git a/README.md b/README.md
index d80cc9f..35a14a7 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,12 @@
-# Fine-Tuning LLMs
+# Jupyter Notebook Examples
 
-*View these notebooks in a more readable format at [danliden.com/fine-tuning](https://danliden.com/fine-tuning).*
+*View these notebooks in a more readable format at [danliden.com/notebooks](https://danliden.com/notebooks).*
 
-This series of notebooks is intended to show how to fine-tune language models, starting from smaller models on single-node single-GPU setups and gradually scaling up to multi-GPU and multi-node configurations.
+This repository contains a collection of Jupyter notebooks demonstrating concepts and techniques from different fields. It currently includes a series on fine-tuning language models and will expand to cover other topics over time.
+
+## Fine-Tuning LLMs
+
+The fine-tuning section shows how to fine-tune language models, starting with smaller models on single-node, single-GPU setups and gradually scaling up to multi-GPU and multi-node configurations.
 
 Existing examples and learning resources generally do not bridge the practical gap between single-node single-GPU training when all parameters fit in VRAM, and the various forms of distributed training. These examples, when complete, are intended to show how to train smaller models given sufficient compute resources and then scale the models up until we encounter compute and/or memory constraints. We will then introduce various distributed training approaches aimed at overcoming these issues.
 
@@ -10,11 +14,11 @@ This will, hopefully, serve as a practical and conceptual bridge from single-nod
 
 ## How to use this repository
 
-The examples in this repository are intended to be read sequentially. Later examples build on earlier examples and gradually add scale and complexity.
+The examples in this repository are organized by topic. Within each topic, the notebooks are intended to be read sequentially: later examples often build on earlier ones and gradually add complexity.
 
 ## Contributing
 
 Contributions are welcome, and there are a few different ways to get involved.
-- If you see an error or bug, please [open an issue](https://github.com/djliden/fine-tuning/issues/new) or open a PR.
-- If you have a question about this repository, or you want to request a specific example, please [open an issue](https://github.com/djliden/fine-tuning/issues/new).
-- If you're interested in contributing an example, I encourage you to get in touch. You can [open an issue](https://github.com/djliden/fine-tuning/issues/new) or reach out by email or social media.
\ No newline at end of file
+- If you see an error or bug, please [open an issue](https://github.com/djliden/notebooks/issues/new) or open a PR.
+- If you have a question about this repository, or you want to request a specific example, please [open an issue](https://github.com/djliden/notebooks/issues/new).
+- If you're interested in contributing an example, I encourage you to get in touch. You can [open an issue](https://github.com/djliden/notebooks/issues/new) or reach out by email or social media.
diff --git a/notebooks/_config.yml b/notebooks/_config.yml
index 232e26d..fa60711 100644
--- a/notebooks/_config.yml
+++ b/notebooks/_config.yml
@@ -1,6 +1,7 @@
-title: LLM Fine-Tuning
+title: Jupyter Notebook Examples
 author: Dan Liden
-logo: logo_draft.png
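+# logo_draft.png is deleted in this change; re-enable this line once a final logo.png exists.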
+#logo: logo.png
 execute:
   execute_notebooks: 'off'
 
diff --git a/notebooks/_toc.yml b/notebooks/_toc.yml
index cd41868..9fa2101 100644
--- a/notebooks/_toc.yml
+++ b/notebooks/_toc.yml
@@ -4,13 +4,17 @@
 format: jb-book
 root: index
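+# Each part below groups chapters under a caption; a chapter nests further pages
+# under "sections". Paths are relative to the book root, without file extensions.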
 parts:
-- caption: Smaller Models (Single GPU)
+- caption: AI Training
   chapters:
-  - file: 1_t5_small_single_gpu/1_T5-Small_on_Single_GPU.ipynb
-  - file: 2_gpt2_single_gpu/2_GPT2_on_a_single_GPU.ipynb
-  - file: 3_tinyllama_instruction_tune/3_instruction_tuning_tinyllama_on_a_single_GPU.ipynb
-  - file: 4_olmo_1b_instruction_tune/4_olmo_instruction_tune.ipynb
-- caption: Other topics of interest
-  chapters:
-  - file: 3_tinyllama_instruction_tune/data_preprocessing.ipynb
-  - file: 5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb
\ No newline at end of file
+  - file: fine_tuning/intro
+    sections:
+    - file: ai_training/fine_tuning/1_t5_small_single_gpu/1_T5-Small_on_Single_GPU
+    - file: ai_training/fine_tuning/2_gpt2_single_gpu/2_GPT2_on_a_single_GPU
+    - file: ai_training/fine_tuning/3_tinyllama_instruction_tune/3_instruction_tuning_tinyllama_on_a_single_GPU
+    - file: ai_training/fine_tuning/4_olmo_1b_instruction_tune/4_olmo_instruction_tune
+    - file: ai_training/fine_tuning/5_gemma_2b_axolotl/gemma_2b_axolotl
+  - file: fine_tuning/appendix
+    sections:
+    - file: ai_training/fine_tuning/3_tinyllama_instruction_tune/data_preprocessing
diff --git a/notebooks/1_t5_small_single_gpu/1_T5-Small_on_Single_GPU.ipynb b/notebooks/ai_training/fine_tuning/1_t5_small_single_gpu/1_T5-Small_on_Single_GPU.ipynb
similarity index 100%
rename from notebooks/1_t5_small_single_gpu/1_T5-Small_on_Single_GPU.ipynb
rename to notebooks/ai_training/fine_tuning/1_t5_small_single_gpu/1_T5-Small_on_Single_GPU.ipynb
diff --git a/notebooks/1_t5_small_single_gpu/t5_small_requirements.txt b/notebooks/ai_training/fine_tuning/1_t5_small_single_gpu/t5_small_requirements.txt
similarity index 100%
rename from notebooks/1_t5_small_single_gpu/t5_small_requirements.txt
rename to notebooks/ai_training/fine_tuning/1_t5_small_single_gpu/t5_small_requirements.txt
diff --git a/notebooks/2_gpt2_single_gpu/2_GPT2_on_a_single_GPU.ipynb b/notebooks/ai_training/fine_tuning/2_gpt2_single_gpu/2_GPT2_on_a_single_GPU.ipynb
similarity index 100%
rename from notebooks/2_gpt2_single_gpu/2_GPT2_on_a_single_GPU.ipynb
rename to notebooks/ai_training/fine_tuning/2_gpt2_single_gpu/2_GPT2_on_a_single_GPU.ipynb
diff --git a/notebooks/2_gpt2_single_gpu/gpt2_requirements.txt b/notebooks/ai_training/fine_tuning/2_gpt2_single_gpu/gpt2_requirements.txt
similarity index 100%
rename from notebooks/2_gpt2_single_gpu/gpt2_requirements.txt
rename to notebooks/ai_training/fine_tuning/2_gpt2_single_gpu/gpt2_requirements.txt
diff --git a/notebooks/3_tinyllama_instruction_tune/3_instruction_tuning_tinyllama_on_a_single_GPU.ipynb b/notebooks/ai_training/fine_tuning/3_tinyllama_instruction_tune/3_instruction_tuning_tinyllama_on_a_single_GPU.ipynb
similarity index 100%
rename from notebooks/3_tinyllama_instruction_tune/3_instruction_tuning_tinyllama_on_a_single_GPU.ipynb
rename to notebooks/ai_training/fine_tuning/3_tinyllama_instruction_tune/3_instruction_tuning_tinyllama_on_a_single_GPU.ipynb
diff --git a/notebooks/3_tinyllama_instruction_tune/data_preprocessing.ipynb b/notebooks/ai_training/fine_tuning/3_tinyllama_instruction_tune/data_preprocessing.ipynb
similarity index 100%
rename from notebooks/3_tinyllama_instruction_tune/data_preprocessing.ipynb
rename to notebooks/ai_training/fine_tuning/3_tinyllama_instruction_tune/data_preprocessing.ipynb
diff --git a/notebooks/3_tinyllama_instruction_tune/tinyllama_requirements.txt b/notebooks/ai_training/fine_tuning/3_tinyllama_instruction_tune/tinyllama_requirements.txt
similarity index 100%
rename from notebooks/3_tinyllama_instruction_tune/tinyllama_requirements.txt
rename to notebooks/ai_training/fine_tuning/3_tinyllama_instruction_tune/tinyllama_requirements.txt
diff --git a/notebooks/4_olmo_1b_instruction_tune/4_olmo_instruction_tune.ipynb b/notebooks/ai_training/fine_tuning/4_olmo_1b_instruction_tune/4_olmo_instruction_tune.ipynb
similarity index 100%
rename from notebooks/4_olmo_1b_instruction_tune/4_olmo_instruction_tune.ipynb
rename to notebooks/ai_training/fine_tuning/4_olmo_1b_instruction_tune/4_olmo_instruction_tune.ipynb
diff --git a/notebooks/4_olmo_1b_instruction_tune/olmo_requirements.txt b/notebooks/ai_training/fine_tuning/4_olmo_1b_instruction_tune/olmo_requirements.txt
similarity index 100%
rename from notebooks/4_olmo_1b_instruction_tune/olmo_requirements.txt
rename to notebooks/ai_training/fine_tuning/4_olmo_1b_instruction_tune/olmo_requirements.txt
diff --git a/notebooks/5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb b/notebooks/ai_training/fine_tuning/5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb
similarity index 100%
rename from notebooks/5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb
rename to notebooks/ai_training/fine_tuning/5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb
diff --git a/notebooks/index.md b/notebooks/index.md
index c9c62a8..dba5841 100644
--- a/notebooks/index.md
+++ b/notebooks/index.md
@@ -1,14 +1,34 @@
-# Fine-Tuning LLMs
+# Jupyter Notebook Examples
 
-This series of notebooks is intended to show how to fine-tune language models, starting from smaller models on single-node single-GPU setups and gradually scaling up to multi-GPU and multi-node configurations.
+This repository contains a collection of Jupyter notebooks demonstrating concepts and techniques from different fields. It currently includes a series on fine-tuning language models and will expand to cover other topics over time.
 
-Existing examples and learning resources generally do not bridge the practical gap between single-node single-GPU training when all parameters fit in VRAM, and the various forms of distributed training. These examples, when complete, are intended to show how to train smaller models given sufficient compute resources and then scale the models up until we encounter compute and/or memory constraints. We will then introduce various distributed training approaches aimed at overcoming these issues.
+## AI Training: Fine-Tuning LLMs
 
-This will, hopefully, serve as a practical and conceptual bridge from single-node single-GPU training to distributed training with tools such as deepspeed and FSDP.
+The AI Training section currently focuses on fine-tuning language models. It shows how to fine-tune models, starting with smaller models on single-GPU setups and gradually scaling up to multi-GPU and multi-node configurations.
+
+These examples aim to bridge the gap between single-node single-GPU training and distributed training with tools such as DeepSpeed and FSDP, serving as a practical and conceptual guide to scaling up model training.
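+
+As a first taste of what that scaling step involves, here is a minimal, illustrative sketch (not taken from these notebooks) of wrapping a model with PyTorch FSDP under a `torchrun` launch:
+
+```python
+# Illustrative sketch only: assumes one process per GPU, launched via
+#   torchrun --nproc_per_node=<num_gpus> train.py
+import os
+
+import torch
+import torch.distributed as dist
+from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
+from transformers import AutoModelForCausalLM
+
+dist.init_process_group("nccl")                       # one rank per GPU
+torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # torchrun sets LOCAL_RANK
+
+model = AutoModelForCausalLM.from_pretrained("gpt2")
+model = FSDP(model.cuda())  # shards parameters, gradients, and optimizer state
+```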
 
 ## How to use this repository
 
-The examples in this repository are intended to be read sequentially. Later examples build on earlier examples and gradually add scale and complexity.
+The examples in this repository are organized by topic. Within each topic, the notebooks are intended to be read sequentially: later examples often build on earlier ones and gradually add complexity.
 
 ```{tableofcontents}
 ```
diff --git a/notebooks/logo_draft.png b/notebooks/logo_draft.png
deleted file mode 100644
index b161bdb..0000000
Binary files a/notebooks/logo_draft.png and /dev/null differ