From ecb09a8b213f1b797e7657cbfc0c6ef0886d04c6 Mon Sep 17 00:00:00 2001
From: xrsrke
Date: Sun, 10 Dec 2023 09:56:18 +0700
Subject: [PATCH] [Readme] Add multi-modal pre-training

---
 README.md | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 54a436c..574f6a7 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# 🚧 pipegoose: Large scale 4D parallelism pre-training for 🤗 `transformers` in Mixture of Experts
+# 🚧 pipegoose: Large-scale 4D parallelism multi-modal pre-training for 🤗 `transformers` in Mixture of Experts
 
 [](https://github.com/xrsrke/pipegoose) [![tests](https://github.com/xrsrke/pipegoose/actions/workflows/tests.yaml/badge.svg)](https://github.com/xrsrke/pipegoose/actions/workflows/tests.yaml) [](https://discord.gg/s9ZS9VXZ3p) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [Codecov](https://app.codecov.io/gh/xrsrke/pipegoose) [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)
 
@@ -6,6 +6,14 @@
 
 
 
+We're building an end-to-end framework for training multi-modal Mixture-of-Experts models in a decentralized way, as proposed in the paper [DiLoCo](https://arxiv.org/abs/2311.08105). The core papers that we are replicating are:
+- DiLoCo: Distributed Low-Communication Training of Language Models [[link]](https://arxiv.org/abs/2311.08105)
+- Pipeline MoE: A Flexible MoE Implementation with Pipeline Parallelism [[link]](https://arxiv.org/abs/2304.11414)
+- Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity [[link]](https://arxiv.org/abs/2101.03961)
+- Flamingo: a Visual Language Model for Few-Shot Learning [[link]](https://arxiv.org/abs/2204.14198)
+- Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism [[link]](https://arxiv.org/abs/1909.08053)
+
+
 ⚠️ **The project is actively under development, and we're actively seeking collaborators. Come join us: [[discord link]](https://discord.gg/s9ZS9VXZ3p) [[roadmap]](https://github.com/users/xrsrke/projects/5) [[good first issue]](https://github.com/xrsrke/pipegoose/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)**
 
 
@@ -87,10 +95,10 @@ We did a small scale correctness test by comparing the validation losses between
 - Distributed Optimizer ZeRO-1 Convergence: [[sgd link]](https://wandb.ai/xariusdrake/pipegoose/runs/fn4t9as4?workspace) [[adam link]](https://wandb.ai/xariusdrake/pipegoose/runs/yn4m2sky)
 
 **Features**
-- Megatron-style 3D parallelism
+- End-to-end multi-modal training with 3D parallelism, including a distributed CLIP
 - Sequence parallelism and Mixture of Experts that work in 3D parallelism
 - ZeRO-1: Distributed Optimizer
-- Highly optimized CUDA kernels port from Megatron-LM, DeepSpeed
+- Kernel fusion
 - ...
 
 **Appreciation**