Allegro-txt2vid-pinokio

A Pinokio Install Script for the Rhymes Allegro text-to-video model.

Sources

Important Notes

  • Generation can take over an hour: roughly 1 hour on a 3090 and about 30 minutes on a 4090. Hopefully this will be optimized soon.

  • My GPU warms me at night.

  • 12 GB of VRAM may be sufficient to run it, although usage does briefly spike higher in a couple of places, so make sure you haven't disabled CUDA SYSMEM fallback.

  • After the inference steps have completed, there's a VAE decode phase that can take quite a while, so don't panic and stop it prematurely!

  • Image-to-video (img2vid) is apparently in the works: rhymes-ai#8

  • Yes, it's not (yet) optimized for consumer GPUs: rhymes-ai#20

Original Project README:

Gallery · Hugging Face · Blog · Paper · Discord · Join Waitlist (Try it on Discord!)

Allegro is a powerful text-to-video model that generates high-quality videos up to 6 seconds at 15 FPS and 720p resolution from simple text input.

News 🔥

  • [24/10/30] 🚀 We released multi-card inference code and PAB in Allegro-VideoSys. With the VideoSys framework, inference time can be further reduced to 3 mins (8xH100) and 2 mins (8xH100 + PAB). We have also opened a PR to the original VideoSys repo.

  • [24/10/29] 🎉 Allegro has been merged into diffusers! It is currently supported in 0.32.0-dev0 and will be included in the next release. For now, please install the dev version with pip install git+https://github.com/huggingface/diffusers.git@9214f4a3782a74e510eff7e09b59457fe8b63511. See Hugging Face for more details; a usage sketch follows below.
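    As a reference, here is a minimal sketch of driving Allegro through that diffusers dev build. The model id rhymes-ai/Allegro, the vae subfolder, and the exact call arguments are assumptions based on the diffusers integration rather than the official example; check the Hugging Face model card for the authoritative snippet.

    import torch
    from diffusers import AllegroPipeline, AutoencoderKLAllegro
    from diffusers.utils import export_to_video

    # Per the Model Info table below, the VAE runs best in FP32 while DiT/T5 run in BF16.
    vae = AutoencoderKLAllegro.from_pretrained(
        "rhymes-ai/Allegro", subfolder="vae", torch_dtype=torch.float32
    )
    pipe = AllegroPipeline.from_pretrained(
        "rhymes-ai/Allegro", vae=vae, torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")
    pipe.vae.enable_tiling()  # helps keep the VAE decode phase from spiking VRAM

    prompt = (
        "A seaside harbor with bright sunlight and sparkling seawater, "
        "with many boats in the water."
    )
    frames = pipe(
        prompt,
        guidance_scale=7.5,
        num_inference_steps=100,
        generator=torch.Generator("cuda").manual_seed(42),
    ).frames[0]
    export_to_video(frames, "test_video.mp4", fps=15)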

Model Info

Model                     Allegro
Description               Text-to-Video Generation Model
Download                  Hugging Face
Parameters                VAE: 175M; DiT: 2.8B
Inference Precision       VAE: FP32/TF32/BF16/FP16 (best in FP32/TF32); DiT/T5: BF16/FP32/TF32
Context Length            79.2K
Resolution                720 x 1280
Frames                    88
Video Length              6 seconds @ 15 FPS
Single GPU Memory Usage   9.3G BF16 (with cpu_offload)
Inference Time            20 mins (single H100) / 3 mins (8xH100)

Quick Start

Single Inference

  1. Download the Allegro GitHub code.

  2. Install the necessary requirements.

    • Ensure Python >= 3.10, PyTorch >= 2.4, CUDA >= 12.4. For details, see requirements.txt.

    • It is recommended to use Anaconda to create a new environment (Python >= 3.10) to run the following example.

  3. Download the Allegro model weights.

  4. Run inference.

    python single_inference.py \
    --user_prompt 'A seaside harbor with bright sunlight and sparkling seawater, with many boats in the water. From an aerial view, the boats vary in size and color, some moving and some stationary. Fishing boats in the water suggest that this location might be a popular spot for docking fishing boats.' \
    --save_path ./output_videos/test_video.mp4 \
    --vae your/path/to/vae \
    --dit your/path/to/transformer \
    --text_encoder your/path/to/text_encoder \
    --tokenizer your/path/to/tokenizer \
    --guidance_scale 7.5 \
    --num_sampling_steps 100 \
    --seed 42

    Use --enable_cpu_offload to offload the model to the CPU for lower GPU memory usage (about 9.3G, compared to 27.5G without CPU offload), but inference time will increase significantly.

  5. (Optional) Interpolate the video to 30 FPS.

    It is recommended to use EMA-VFI to interpolate the video from 15 FPS to 30 FPS.

    For better visual quality, please use imageio to save the video.
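    A minimal sketch of the imageio route, assuming frames is a list of HxWx3 uint8 numpy arrays (e.g. the 30 FPS output of EMA-VFI interpolation) and that imageio-ffmpeg is installed for MP4 output:

    import imageio

    # frames: list of HxWx3 uint8 numpy arrays at the interpolated frame rate
    imageio.mimwrite("./output_videos/test_video_30fps.mp4", frames, fps=30, quality=8)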

Multi-Card Inference

We release multi-card inference code and PAB in Allegro-VideoSys.

Limitation

  • The model cannot render celebrities, legible text, specific locations, streets or buildings.

Future Plan

  • Multiple GPU inference and further speed up (PAB)
  • Text & Image-To-Video (TI2V) video generation
  • Motion-controlled video generation
  • Visual quality enhancement

Support

If you encounter any problems or have any suggestions, feel free to open an issue or send an email to [email protected].

Citation

Please consider citing our technical report if you find the code and pre-trained models useful for your project.

@article{allegro2024,
  title={Allegro: Open the Black Box of Commercial-Level Video Generation Model},
  author={Yuan Zhou and Qiuyue Wang and Yuxuan Cai and Huan Yang},
  journal={arXiv preprint arXiv:2410.15458},
  year={2024}
}

License

This repo is released under the Apache 2.0 License.

Disclaimer

The Allegro series models are provided on an "AS IS" basis, and we disclaim any liability for consequences or damages arising from your use. Users are kindly advised to ensure compliance with all applicable laws and regulations. This includes, but is not limited to, prohibitions against illegal activities and the generation of content that is violent, pornographic, obscene, or otherwise deemed non-safe, inappropriate, or illegal. By using these models, you agree that we shall not be held accountable for any consequences resulting from your use.

Acknowledgment

We extend our heartfelt appreciation for the great contributions of the open-source community, especially Open-Sora-Plan, as we build our diffusion transformer (DiT) on Open-Sora-Plan v1.2.

  • Open-Sora-Plan: A project that aims to create a simple and scalable repo to reproduce Sora.
  • Open-Sora: An initiative dedicated to efficiently producing high-quality video.
  • ColossalAI: A powerful large model parallel acceleration and optimization system.
  • VideoSys: An open-source project that provides a user-friendly and high-performance infrastructure for video generation.
  • DiT: Scalable Diffusion Models with Transformers.
  • PixArt: An open-source DiT-based text-to-image model.
  • StabilityAI VAE: A powerful image VAE model.
  • CLIP: A powerful text-image embedding model.
  • T5: A powerful text encoder.
  • Playground: A state-of-the-art open-source model in text-to-image generation.
  • EMA-VFI: A video frame interpolation model.
