Releases: turboderp-org/exllamav2

0.0.19

19 Apr 06:44
ed118b4
  • More accurate Q4 cache using groupwise rotations
  • Better prompt ingestion speed when using flash-attn
  • Minor fixes related to issues quantizing Llama 3
  • New, more robust optimizer
  • Fix for a bug in long-sequence inference with GPTQ models

Full Changelog: v0.0.18...v0.0.19

0.0.18

07 Apr 18:41
dafb508
  • Support for Command-R-plus
  • Fix for pre-AVX2 CPUs
  • VRAM optimizations for quantization
  • Very preliminary multimodal support
  • Various other small fixes and optimizations

Full Changelog: v0.0.17...v0.0.18

0.0.17

31 Mar 03:19

Mostly just minor fixes and support for DBRX models.

Full Changelog: v0.0.16...v0.0.17

0.0.16

20 Mar 07:23
  • Adds support for Cohere models
  • N-gram decoding (see the sketch after this list)
  • A few bugfixes
  • Lots of optimizations
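
A minimal sketch of turning on n-gram decoding with the streaming generator. The loading sequence follows the library's usual examples; the model path is hypothetical, and the speculative_ngram attribute name is an assumption to verify against your installed version:

    from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
    from exllamav2.generator import ExLlamaV2StreamingGenerator

    config = ExLlamaV2Config()
    config.model_dir = "/path/to/model"   # hypothetical model directory
    config.prepare()
    model = ExLlamaV2(config)
    cache = ExLlamaV2Cache(model, lazy = True)
    model.load_autosplit(cache)
    tokenizer = ExLlamaV2Tokenizer(config)

    generator = ExLlamaV2StreamingGenerator(model, cache, tokenizer)
    generator.speculative_ngram = True    # assumed switch enabling n-gram decoding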

Full Changelog: v0.0.15...v0.0.16

0.0.15

07 Mar 02:26
  • Adds Q4 cache mode (see the sketch below)
  • Support for StarCoder2
  • Minor optimizations and a couple of bugfixes
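
A minimal sketch of the new Q4 cache mode, assuming a hypothetical model path; ExLlamaV2Cache_Q4 acts as a drop-in replacement for the regular ExLlamaV2Cache:

    from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4

    config = ExLlamaV2Config()
    config.model_dir = "/path/to/model"   # hypothetical model directory
    config.prepare()
    model = ExLlamaV2(config)

    # Same interface as ExLlamaV2Cache, but keys/values are stored quantized to 4 bits
    cache = ExLlamaV2Cache_Q4(model, lazy = True)
    model.load_autosplit(cache)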

Full Changelog: v0.0.14...v0.0.15

0.0.14

24 Feb 05:54

Adds support for Qwen1.5 and Gemma architectures.

Various fixes and optimizations.

Full Changelog since 0.0.13: v0.0.13...v0.0.14

0.0.13.post2

15 Feb 00:28

0.0.13.post1

04 Feb 23:11

Fixes inference on models with vocab sizes that are not multiples of 32.

0.0.13

02 Feb 18:17

This release mostly updates the prebuilt wheels to Torch 2.2, since Torch 2.2 won't load extensions built for earlier versions.

Also adds dynamic temperature and quadratic sampling, fixes a performance regression seen on some GPUs after the batch optimizations, and includes various other small fixes.
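
A sketch of what the new sampling options look like on the sampler settings object; the attribute names (smoothing_factor for quadratic sampling, min_temp/max_temp/temp_exponent for dynamic temperature) are assumptions to check against your version:

    from exllamav2.generator import ExLlamaV2Sampler

    settings = ExLlamaV2Sampler.Settings()
    settings.smoothing_factor = 0.25   # quadratic sampling; 0 disables (assumed name)
    settings.min_temp = 0.7            # dynamic temperature range (assumed names)
    settings.max_temp = 1.5
    settings.temp_exponent = 1.0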

0.0.12

22 Jan 20:04

Lots of fixes and tweaks. Main feature updates:

Model support:

  • Basic LoRA support for MoE models
  • Support for Orion models (also groundwork for other layernorm models)
  • Support for loading/converting from Axolotl checkpoints

Generation/sampling:

  • Fused kernels enabled for num_experts = 4
  • Option to return probs from streaming generator
  • Add top-A sampling
  • Add frequency/presence penalties (both shown in the sketch after this list)
  • CFG support in streaming generator
  • Disable flash-attn for non-causal attention (fixes left-padding until FA2 implements custom bias)
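
A sketch of the new sampling controls on ExLlamaV2Sampler.Settings; top_a matches the top-A sampling item above, while the penalty attribute names are assumptions to verify against your version:

    from exllamav2.generator import ExLlamaV2Sampler

    settings = ExLlamaV2Sampler.Settings()
    settings.top_a = 0.2                       # top-A sampling; 0 disables
    settings.token_frequency_penalty = 0.05    # assumed names for the new
    settings.token_presence_penalty = 0.0      # frequency/presence penalties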

Testing/evaluation:

  • HumanEval test
  • Script to compare two models layer by layer (e.g. quantized vs. original model)
  • "Standard" ppl test that attempts to mimic text-generation-webui

Conversion:

  • VRAM optimizations
  • Optimized quantization kernels

IO:

  • Cache safetensors context managers for faster loading
  • Optional direct IO loader (for very fast storage arrays)