Commit

update readme
feifeibear committed Aug 9, 2024
1 parent 6fb0c3b commit 222c9ba
Showing 1 changed file with 11 additions and 3 deletions.
14 changes: 11 additions & 3 deletions README.md
@@ -141,18 +141,26 @@ Here are the benchmark results for Pixart-Alpha using the 20-step DPM solver as

<h2 id="QuickStart">🚀 QuickStart</h2>

### 1. Install from pip

```
pip install xfuser
```
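
To sanity-check the install, importing the package should succeed; the module name `xfuser` is assumed here to match the pip package name:

```
python -c "import xfuser"
```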

### 2. Install from source

#### 2.1 Install yunchang for sequence parallelism

Install yunchang from [feifeibear/long-context-attention](https://github.com/feifeibear/long-context-attention).
Please note that it depends on flash attention and has specific GPU model requirements. We recommend installing yunchang from source rather than using `pip install yunchang==0.2.0`.
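
As a rough sketch of what a source install usually looks like (the exact build steps may differ, so follow the instructions in the yunchang repository):

```
# Sketch only: consult the long-context-attention README for the exact steps,
# and make sure a flash attention build matching your GPU/CUDA is available first.
git clone https://github.com/feifeibear/long-context-attention.git
cd long-context-attention
pip install .
```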

#### 2.2 Install xDiT

```
python setup.py install
```
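
If you prefer a pip-managed build, an editable install from the repository root is a common alternative; this is a sketch under the assumption that the project's packaging supports it, not the documented route:

```
pip install -e .
```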

### 3. Usage

We provide examples demonstrating how to run models with xDiT in the [./examples/](./examples/) directory.
You can easily modify the model type, model directory, and parallel options in [examples/run.sh](examples/run.sh) to run the already supported DiT models.
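
For a sense of what a launch looks like, multi-GPU runs typically go through `torchrun`; the example script name, model path, and parallelism flags below are illustrative placeholders rather than the project's documented interface, so check [examples/run.sh](examples/run.sh) for the real options:

```
# Illustrative placeholders only: see examples/run.sh for the actual script and flags.
torchrun --nproc_per_node=4 examples/pixartalpha_example.py \
    --model /path/to/PixArt-XL-2-1024-MS \
    --ulysses_degree 2 \
    --pipefusion_parallel_degree 2 \
    --prompt "A photo of a small cat"
```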