From 222c9babc6e538b0a53009f669b1e64fa09fb49c Mon Sep 17 00:00:00 2001
From: feifeibear
Date: Fri, 9 Aug 2024 06:11:26 +0000
Subject: [PATCH] update readme

---
 README.md | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index c1a56ee6..7c21084a 100644
--- a/README.md
+++ b/README.md
@@ -141,18 +141,26 @@ Here are the benchmark results for Pixart-Alpha using the 20-step DPM solver as

🚀 QuickStart

-1. Install yunchang for sequence parallel.
+### 1. Install from pip
+
+```
+pip install xfuser
+```
+
+### 2. Install from source
+
+#### 2.1 Install yunchang for sequence parallelism
 
 Install yunchang from [feifeibear/long-context-attention](https://github.com/feifeibear/long-context-attention).
 Please note that it has a dependency on flash attention and specific GPU model requirements. We recommend installing yunchang from the source code rather than using `pip install yunchang==0.2.0`.
 
-2. Install xDiT
+#### 2.2 Install xDiT
 
 ```
 python setup.py install
 ```
 
-3. Usage
+### 3. Usage
 
 We provide examples demonstrating how to run models with xDiT in the [./examples/](./examples/) directory. You can easily modify the model type, model directory, and parallel options in the [examples/run.sh](examples/run.sh) script to run some already supported DiT models.
 
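
Taken together, the from-source path added by this patch amounts to the sketch below. Only `python setup.py install` for xDiT and the yunchang repository URL come from the README text itself; the xDiT repository URL and the `pip install .` build step for yunchang are assumptions for illustration.

```
# 2.1 yunchang (sequence parallelism); requires flash attention and a
# supported GPU model. Repo URL is from the README text above.
git clone https://github.com/feifeibear/long-context-attention.git
cd long-context-attention
pip install .            # assumed build step; installs from source as the README recommends
cd ..

# 2.2 xDiT; the repository URL below is an assumption for this sketch.
git clone https://github.com/xdit-project/xDiT.git
cd xDiT
python setup.py install  # from the README
```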
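
For the usage step, a minimal sketch under the assumption that [examples/run.sh](examples/run.sh) is directly runnable once its variables are set:

```
# Edit examples/run.sh first to set the model type, model directory,
# and parallel options described above, then launch it:
bash examples/run.sh
```

Which launcher and flags the script forwards depends on the model you pick; the example entry points live in [./examples/](./examples/).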