Releases: keras-team/keras-hub

0.14.0

27 Jun 08:13

Summary

  • Add Gemma 2 model!
  • Support loading fine-tuned Hugging Face transformers checkpoints in KerasNLP. Gemma and Llama 3 models are supported for now, and checkpoints are converted on the fly.
  • KerasNLP no longer supports Keras 2. Read Getting started with Keras for more information on installing Keras 3 and its compatibility with different frameworks. We recommend using KerasNLP with TensorFlow 2.16 or later, as TF 2.16 packages Keras 3 by default.

What's Changed

Full Changelog: v0.12.1...r0.14

v0.12.1

24 May 00:35
4aa0503

Summary

  • ⚠️ PaliGemma includes rescaling by default, so images are expected to be passed in the [0, 255] range. This is a backward-incompatible change from the original release. Restore the original behavior as follows:
keras_nlp.models.PaliGemmaBackbone.from_preset(
    "pali_gemma_3b_224",
    include_rescaling=False,
)
  • Released the Falcon model.
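To picture what the default rescaling does, here is a standalone NumPy sketch. It assumes the standard [-1, 1] image normalization used by the original PaliGemma preprocessing; `rescale` is an illustrative helper, not the library's implementation.

```python
import numpy as np

# Sketch of the default rescaling (an assumption based on the original
# PaliGemma preprocessing): map pixel values from [0, 255] to [-1, 1].
def rescale(images):
    return images / 127.5 - 1.0

pixels = np.array([0.0, 127.5, 255.0])
print(rescale(pixels))  # -> [-1.  0.  1.]
```

With include_rescaling=False, the model skips this step and expects images already normalized by the caller.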

What's Changed

New Contributors

Full Changelog: v0.12.0...v0.12.1

v0.12.0

21 May 22:09
6339d29

Summary

Add PaliGemma, Llama 3, and Phi 3 models.

PaliGemma quickstart; see a complete usage example on Kaggle.

pali_gemma_lm = keras_nlp.models.PaliGemmaCausalLM.from_preset(
    "pali_gemma_3b_224"
)
pali_gemma_lm.generate(
    inputs={
        "images": images,
        "prompts": prompts,
    }
)

What's Changed

Full Changelog: v0.11.1...v0.12.0

v0.11.1

03 May 15:12
5860400

Summary

  • Add new Code Gemma 1.1 presets, which improve on Code Gemma performance.

What's Changed

Full Changelog: v0.11.0...v0.11.1

v0.11.0

03 May 02:53
4296fd9

Summary

This release has no major feature updates, but changes where our source code is held. Source code is now split into src/ and api/ directories with an explicit API surface, similar to core Keras.

When adding or removing API in a PR, run ./shell/api_gen.sh to update the autogenerated api/ files. See our contributing guide.

What's Changed

New Contributors

Full Changelog: v0.10.0...v0.11.0

v0.10.0

29 Apr 18:16
bd74d8e

Summary

  • Added support for Task (CausalLM and Classifier) saving and loading, which allows uploading Tasks.
  • Added basic Model Card for Hugging Face upload.
  • Added support for a positions array in our RotaryEmbedding layer.
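To picture what an explicit positions array buys you, here is a standalone NumPy sketch of rotary position embeddings. This is an illustration of the technique, not the library's layer; `rotary_embed` and its signature are hypothetical.

```python
import numpy as np

# Illustrative sketch of rotary position embeddings with an explicit
# positions array (hypothetical helper, not KerasNLP's implementation).
def rotary_embed(x, positions, max_wavelength=10000.0):
    dim = x.shape[-1]  # must be even
    inv_freq = 1.0 / (max_wavelength ** (np.arange(0, dim, 2) / dim))
    angles = positions[:, None] * inv_freq[None, :]  # (seq_len, dim // 2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]  # even/odd feature pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x2 * cos + x1 * sin
    return out

x = np.random.rand(4, 8)
# Explicit positions let callers rotate at non-contiguous indices,
# e.g. when decoding with a cache or using padded batches.
rotated = rotary_embed(x, positions=np.array([0, 1, 5, 6]))
```

Note that position 0 leaves a vector unchanged, and rotation preserves vector norms; both properties are handy sanity checks when wiring positions through a model.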

What's Changed

Full Changelog: v0.9.3...v0.10.0

v0.9.3

10 Apr 21:30
d38494a

Patch release with fixes for Llama and Mistral saving.

What's Changed

Full Changelog: v0.9.2...v0.9.3

v0.9.2

09 Apr 03:54
4d10195

Summary

  • Initial release of CodeGemma.
  • Bump to a Gemma 1.1 version without download issues on Kaggle.

What's Changed

Full Changelog: v0.9.1...v0.9.2

v0.9.1

06 Apr 02:39
c764f98

Patch fix for bug with stop_token_ids.
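For context, stop token ids end generation early when a designated token is produced. A toy sketch of the truncation step (illustrative only, not KerasNLP's sampler):

```python
# Toy sketch: cut a generated sequence at the first stop token
# (illustrative helper, not KerasNLP's implementation).
def truncate_at_stop_tokens(token_ids, stop_token_ids):
    out = []
    for token in token_ids:
        if token in stop_token_ids:
            break
        out.append(token)
    return out

print(truncate_at_stop_tokens([5, 9, 2, 7], stop_token_ids={2}))  # -> [5, 9]
```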

What's Changed

Full Changelog: v0.9.0...v0.9.1

v0.9.0

06 Apr 00:42
8731d1d

The 0.9.0 release adds new models, hub integrations, and general usability improvements.

Summary

  • Added the Gemma 1.1 release.
  • Added the Llama 2, BLOOM and ELECTRA models.
  • Exposed new base classes and allowed from_preset() on them.
    • keras_nlp.models.Backbone
    • keras_nlp.models.Task
    • keras_nlp.models.Classifier
    • keras_nlp.models.CausalLM
    • keras_nlp.models.Seq2SeqLM
    • keras_nlp.models.MaskedLM
  • Added some initial features for uploading to model hubs.
    • backbone.save_to_preset, tokenizer.save_to_preset, keras_nlp.upload_preset.
    • from_preset and upload_preset now work with the Hugging Face Models Hub.
    • More features (task saving, lora saving), and full documentation coming soon.
  • Numerical fixes for the Gemma model at mixed_bfloat16 precision. Thanks to unsloth for catching this!
# Llama 2. Needs Kaggle consent and login, see https://github.com/Kaggle/kagglehub
causal_lm = keras_nlp.models.LlamaCausalLM.from_preset(
    "llama2_7b_en",
    dtype="bfloat16", # Run at half precision for inference.
)
causal_lm.generate("Keras is a", max_length=128)
# Base class usage.
keras_nlp.models.Classifier.from_preset("bert_base_en", num_classes=2)
keras_nlp.models.Tokenizer.from_preset("gemma_2b_en")
keras_nlp.models.CausalLM.from_preset("gpt2_base_en", dtype="mixed_bfloat16")

What's Changed

New Contributors

Full Changelog: v0.8.2...v0.9.0