
Bump transformers from 4.38.2 to 4.39.0 in /docker/ml #25489

Merged
merged 1 commit into master from dependabot/pip/docker/ml/transformers-4.39.0 on Mar 21, 2024

Conversation


@dependabot dependabot bot commented on behalf of github Mar 21, 2024

Bumps transformers from 4.38.2 to 4.39.0.

Release notes

Sourced from transformers' releases.

Release v4.39.0


🚨 VRAM consumption 🚨

The Llama, Cohere, and Gemma models no longer cache the triangular causal mask unless the static cache is used. This was reverted by #29753, which fixes the backward-compatibility issues with respect to speed and memory consumption while still supporting compile and the static cache. Note that fx is not yet supported for these models; a patch will follow soon.
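
A minimal sketch of opting into the static cache at generation time. The cache_implementation setting is documented generation-config behavior in recent releases; the checkpoint id is a placeholder assumption:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# With the static cache, buffers (including the causal mask) are preallocated,
# which is compile-friendly; without it, the mask is rebuilt per call to save VRAM.
model.generation_config.cache_implementation = "static"

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))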

New model addition

Cohere open-source model

Command-R is a generative model optimized for long context tasks such as retrieval augmented generation (RAG) and using external APIs and tools. It is designed to work in concert with Cohere's industry-leading Embed and Rerank models to provide best-in-class integration for RAG applications and excel at enterprise use cases. As a model built for companies to implement at scale, Command-R boasts:

  • Strong accuracy on RAG and Tool Use
  • Low latency and high throughput
  • Longer 128k context and lower pricing
  • Strong capabilities across 10 key languages
  • Model weights available on HuggingFace for research and evaluation
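
A minimal loading sketch, assuming the checkpoint id and chat-template usage published on the Command-R model card:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-v01"  # id as published on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Command-R ships a chat template; apply it instead of hand-building prompts.
messages = [{"role": "user", "content": "Why does RAG benefit from a reranker?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen[0], skip_special_tokens=True))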

LLaVA-NeXT (llava v1.6)

LLaVA-NeXT is the next version of LLaVA. It adds better support for non-padded images along with improved reasoning, OCR, and world knowledge, and even exceeds Gemini Pro on several benchmarks.

Compared with LLaVA-1.5, LLaVA-NeXT has several improvements:

  • Increases the input image resolution to 4x more pixels, allowing it to grasp more visual detail. It supports three aspect ratios, with resolutions up to 672x672, 336x1344, and 1344x336.
  • Better visual reasoning and OCR capability with an improved visual instruction tuning data mixture.
  • Better visual conversation for more scenarios, covering different applications.
  • Better world knowledge and logical reasoning.
  • Along with performance improvements, LLaVA-NeXT maintains the minimalist design and data efficiency of LLaVA-1.5. It reuses the pretrained connector of LLaVA-1.5 and still uses less than 1M visual instruction tuning samples. The largest 34B variant finishes training in ~1 day with 32 A100s.

Figure (taken from the original paper): LLaVA-NeXT incorporates a higher input resolution by encoding various patches of the input image.
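
A short usage sketch with the new LLaVA-NeXT classes. The class names match the v4.39 docs; the checkpoint id and the Mistral-style prompt format follow the llava-hf model cards and are assumptions:

import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # one of the released variants
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# <image> marks where the encoded image patches are spliced into the prompt.
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))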

MusicGen Melody

The MusicGen Melody model was proposed in Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.

MusicGen Melody is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or audio codes, conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio waveform.

Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g. hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass.
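
A text-to-music sketch with the new class. MusicgenMelodyForConditionalGeneration and the facebook/musicgen-melody checkpoint follow the v4.39 docs; the token budget is an arbitrary choice:

from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

# Text-only conditioning; an audio prompt can also be passed to the processor.
inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")

# generate() predicts the discrete audio codes in a single pass and decodes them
# through the bundled compression model (EnCodec) back into a waveform.
audio_values = model.generate(**inputs, max_new_tokens=256)
print(audio_values.shape, model.config.audio_encoder.sampling_rate)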

PvT-v2

The PVTv2 model was proposed in PVT v2: Improved Baselines with Pyramid Vision Transformer by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. As an improved variant of PVT, it eschews position embeddings, relying instead on positional information encoded through zero-padding and overlapping patch embeddings. This lack of reliance on position embeddings simplifies the architecture, and enables running inference at any resolution without needing to interpolate them.
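
An image-classification sketch with the new backbone. The class name is from the v4.39 docs; the OpenGVLab/pvt_v2_b0 checkpoint id is an assumption taken from the Hub:

import requests
from PIL import Image
from transformers import AutoImageProcessor, PvtV2ForImageClassification

checkpoint = "OpenGVLab/pvt_v2_b0"  # smallest PVTv2 variant
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = PvtV2ForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# No position-embedding interpolation needed: positional information comes from
# zero-padding and overlapping patch embeddings, so any input resolution works.
inputs = image_processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])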

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [transformers](https://github.com/huggingface/transformers) from 4.38.2 to 4.39.0.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.38.2...v4.39.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies Pull requests that update a dependency file label Mar 21, 2024
@github-actions github-actions bot enabled auto-merge (squash) March 21, 2024 14:32
@xsoar-bot

Docker Image Ready - Dev

The automatic Docker build at CircleCI has deployed your Docker image: devdemisto/ml:1.0.0.90990
It is now available on Docker Hub at: https://hub.docker.com/r/devdemisto/ml/tags
Get started by pulling the image:

docker pull devdemisto/ml:1.0.0.90990

Docker Metadata

  • Image Size: 1391.15 MB
  • Image ID: sha256:9c346e8d983867a957f16ca1fa8b240f35b063cf82c20330dc395df81f2ce35c
  • Created: 2024-03-21T14:36:48.060733288Z
  • Arch: linux/amd64
  • Command: ["python3"]
  • Environment:
    • PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    • LANG=C.UTF-8
    • GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D
    • PYTHON_VERSION=3.10.10
    • PYTHON_PIP_VERSION=22.3.1
    • PYTHON_SETUPTOOLS_VERSION=65.5.1
    • PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/d5cb0afaf23b8520f1bbcfed521017b4a95f5c01/public/get-pip.py
    • PYTHON_GET_PIP_SHA256=394be00f13fa1b9aaa47e911bdb59a09c3b2986472130f30aa0bfaf7f3980637
    • DOCKER_IMAGE=devdemisto/ml:1.0.0.90990
    • TRANSFORMERS_CACHE=/ml/.cache
    • NLTK_DATA=/ml/nltk_data
    • MPLCONFIGDIR=/ml/matplotlib
  • Labels:
    • org.opencontainers.image.authors: Demisto <[email protected]>
    • org.opencontainers.image.revision: 665399f13083257a82f389c61c8f640e74e5d19a
    • org.opencontainers.image.version: 1.0.0.90990

@github-actions github-actions bot merged commit 4b8ae58 into master Mar 21, 2024
8 checks passed
@github-actions github-actions bot deleted the dependabot/pip/docker/ml/transformers-4.39.0 branch March 21, 2024 14:44
@xsoar-bot

Docker Image Ready - Production

The automatic Docker build at CircleCI has deployed your Docker image: demisto/ml:1.0.0.91000
It is now available on Docker Hub at: https://hub.docker.com/r/demisto/ml/tags
Get started by pulling the image:

docker pull demisto/ml:1.0.0.91000

Docker Metadata

  • Image Size: 1391.15 MB
  • Image ID: sha256:5e278428c76802d2f6196a92ae4b865d603ae7003b003afa3de06c5edf088b88
  • Created: 2024-03-21T14:48:53.416543339Z
  • Arch: linux/amd64
  • Command: ["python3"]
  • Environment:
    • PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    • LANG=C.UTF-8
    • GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D
    • PYTHON_VERSION=3.10.10
    • PYTHON_PIP_VERSION=22.3.1
    • PYTHON_SETUPTOOLS_VERSION=65.5.1
    • PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/d5cb0afaf23b8520f1bbcfed521017b4a95f5c01/public/get-pip.py
    • PYTHON_GET_PIP_SHA256=394be00f13fa1b9aaa47e911bdb59a09c3b2986472130f30aa0bfaf7f3980637
    • DOCKER_IMAGE=demisto/ml:1.0.0.91000
    • TRANSFORMERS_CACHE=/ml/.cache
    • NLTK_DATA=/ml/nltk_data
    • MPLCONFIGDIR=/ml/matplotlib
  • Labels:
    • org.opencontainers.image.authors: Demisto <[email protected]>
    • org.opencontainers.image.revision: 4b8ae5880b13d1f5b0d066a4ca1575f6841ee080
    • org.opencontainers.image.version: 1.0.0.91000
