Bump timm from 1.0.12 to 1.0.13 (#1521)
Bumps [timm](https://github.com/huggingface/pytorch-image-models) from 1.0.12 to 1.0.13. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/pytorch-image-models/releases">timm's releases</a>.</em></p> <blockquote> <h2>Release v1.0.13</h2> <h2>Jan 9, 2025</h2> <ul> <li>Add support to train and validate in pure <code>bfloat16</code> or <code>float16</code></li> <li><code>wandb</code> project name arg added by <a href="https://github.com/caojiaolong">https://github.com/caojiaolong</a>, use arg.experiment for name</li> <li>Fix old issue w/ checkpoint saving not working on filesystem w/o hard-link support (e.g. FUSE fs mounts)</li> <li>1.0.13 release</li> </ul> <h2>Jan 6, 2025</h2> <ul> <li>Add <code>torch.utils.checkpoint.checkpoint()</code> wrapper in <code>timm.models</code> that defaults <code>use_reentrant=False</code>, unless <code>TIMM_REENTRANT_CKPT=1</code> is set in env.</li> </ul> <h2>Dec 31, 2024</h2> <ul> <li><code>convnext_nano</code> 384x384 ImageNet-12k pretrain & fine-tune. <a href="https://huggingface.co/models?search=convnext_nano%20r384">https://huggingface.co/models?search=convnext_nano%20r384</a></li> <li>Add AIM-v2 encoders from <a href="https://github.com/apple/ml-aim">https://github.com/apple/ml-aim</a>, see on Hub: <a href="https://huggingface.co/models?search=timm%20aimv2">https://huggingface.co/models?search=timm%20aimv2</a></li> <li>Add PaliGemma2 encoders from <a href="https://github.com/google-research/big_vision">https://github.com/google-research/big_vision</a> to existing PaliGemma, see on Hub: <a href="https://huggingface.co/models?search=timm%20pali2">https://huggingface.co/models?search=timm%20pali2</a></li> <li>Add missing L/14 DFN2B 39B CLIP ViT, <code>vit_large_patch14_clip_224.dfn2b_s39b</code></li> <li>Fix existing <code>RmsNorm</code> layer & fn to match standard formulation, use PT 2.5 impl when possible. 
Move old impl to <code>SimpleNorm</code> layer, it's LN w/o centering or bias. There were only two <code>timm</code> models using it, and they have been updated.</li> <li>Allow override of <code>cache_dir</code> arg for model creation</li> <li>Pass through <code>trust_remote_code</code> for HF datasets wrapper</li> <li><code>inception_next_atto</code> model added by creator</li> <li>Adan optimizer caution, and Lamb decoupled weight decay options</li> <li>Some feature_info metadata fixed by <a href="https://github.com/brianhou0208">https://github.com/brianhou0208</a></li> <li>All OpenCLIP and JAX (CLIP, SigLIP, Pali, etc) model weights that used load time remapping were given their own HF Hub instances so that they work with <code>hf-hub:</code> based loading, and thus will work with new Transformers <code>TimmWrapperModel</code></li> </ul> <h2>What's Changed</h2> <ul> <li>Punch cache_dir through model factory / builder / pretrain helpers by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2356">huggingface/pytorch-image-models#2356</a></li> <li>Yuweihao inception next atto merge by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2360">huggingface/pytorch-image-models#2360</a></li> <li>Dataset trust remote tweaks by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2361">huggingface/pytorch-image-models#2361</a></li> <li>Add --dataset-trust-remote-code to the train.py and validate.py scripts by <a href="https://github.com/grodino"><code>@grodino</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2328">huggingface/pytorch-image-models#2328</a></li> <li>Fix feature_info.reduction by <a 
href="https://github.com/brianhou0208"><code>@brianhou0208</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2369">huggingface/pytorch-image-models#2369</a></li> <li>Add caution to Adan. Add decouple decay option to LAMB. by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2357">huggingface/pytorch-image-models#2357</a></li> <li>Switching to timm specific weight instances for open_clip image encoders by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2376">huggingface/pytorch-image-models#2376</a></li> <li>Fix broken image link in <code>Quickstart</code> doc by <a href="https://github.com/ariG23498"><code>@ariG23498</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2381">huggingface/pytorch-image-models#2381</a></li> <li>Supporting aimv2 encoders by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2379">huggingface/pytorch-image-models#2379</a></li> <li>fix: minor typos in markdowns by <a href="https://github.com/ruidazeng"><code>@ruidazeng</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2382">huggingface/pytorch-image-models#2382</a></li> <li>Add 384x384 in12k pretrain and finetune for convnext_nano by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2384">huggingface/pytorch-image-models#2384</a></li> <li>Fixed unfused attn2d scale by <a href="https://github.com/laclouis5"><code>@laclouis5</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2387">huggingface/pytorch-image-models#2387</a></li> <li>Fix MQA V2 by <a 
href="https://github.com/laclouis5"><code>@laclouis5</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2388">huggingface/pytorch-image-models#2388</a></li> <li>Wrap torch checkpoint() fn to default use_reentrant flag to False and allow env var override by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2394">huggingface/pytorch-image-models#2394</a></li> <li>Add half-precision (bfloat16, float16) support to train & validate scripts by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2397">huggingface/pytorch-image-models#2397</a></li> <li>Merging wandb project name changes w/ addition by <a href="https://github.com/rwightman"><code>@rwightman</code></a> in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2398">huggingface/pytorch-image-models#2398</a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/brianhou0208"><code>@brianhou0208</code></a> made their first contribution in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2369">huggingface/pytorch-image-models#2369</a></li> <li><a href="https://github.com/ariG23498"><code>@ariG23498</code></a> made their first contribution in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2381">huggingface/pytorch-image-models#2381</a></li> <li><a href="https://github.com/ruidazeng"><code>@ruidazeng</code></a> made their first contribution in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2382">huggingface/pytorch-image-models#2382</a></li> <li><a href="https://github.com/laclouis5"><code>@laclouis5</code></a> made their first contribution in <a href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2387">huggingface/pytorch-image-models#2387</a></li> 
</ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/huggingface/pytorch-image-models/compare/v1.0.12...v1.0.13">https://github.com/huggingface/pytorch-image-models/compare/v1.0.12...v1.0.13</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/47811bc05a2fdff2dedbbb8b8b3a4b9e8dba4bb3"><code>47811bc</code></a> Update README, bump version to 1.0.13 non-dev</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/eeee38e9728c7f751ecfe2f20e2b55cb78b32015"><code>eeee38e</code></a> Avoid unecessary compat break btw train script and nearby timm versions w/ dt...</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/deb9895600ae9c1e169be1ce254dac9fb4d7eca5"><code>deb9895</code></a> Update checkpoint save to fix old hard-link + fuse issue I ran into again... ...</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/c4fb98f399585fe8f93dced592a9b84f04d14a0d"><code>c4fb98f</code></a> Merge pull request <a href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2398">#2398</a> from huggingface/caojiaolong-main</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/c173886e756c1e62afa1202b612b498c652799f1"><code>c173886</code></a> Merge branch 'main' into caojiaolong-main</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/2d0ac6f56720d3c59b4c10f63e8b0d805053e4d5"><code>2d0ac6f</code></a> Merge pull request <a href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2397">#2397</a> from huggingface/half_prec_trainval</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/1969528296a1724a761581c1a0ba39a334383fb6"><code>1969528</code></a> Fix dtype log when default (None) is used w/o AMP</li> <li><a 
href="https://github.com/huggingface/pytorch-image-models/commit/92f610c9823aee905d950d728f0c86f833881c4b"><code>92f610c</code></a> Add half-precision (bfloat16, float16) support to train & validate scripts. S...</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/40c19f3939426aa388f0955229efdb75442b0f83"><code>40c19f3</code></a> Add wandb project name argument and allow change wandb run name</li> <li><a href="https://github.com/huggingface/pytorch-image-models/commit/6f80214e80812cd9725f3960ff8e31d6ed02da90"><code>6f80214</code></a> Merge pull request <a href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2394">#2394</a> from huggingface/non_reentrant_ckpt</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/pytorch-image-models/compare/v1.0.12...v1.0.13">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=timm&package-manager=pip&previous-version=1.0.12&new-version=1.0.13)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
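The <code>RmsNorm</code> fix in the release notes switches timm to the standard RMSNorm formulation (the old behavior, now <code>SimpleNorm</code>, is layer norm without centering or bias). As a point of reference, here is a minimal dependency-free sketch of the standard formulation; the function name and eps value are illustrative only, and timm's actual layer additionally handles dtype upcasting and the fused PyTorch 2.5 implementation:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # Standard RMSNorm: scale inputs by the reciprocal root-mean-square,
    # then apply a learned per-element weight (no mean-centering, no bias).
    ms = sum(v * v for v in x) / len(x)
    inv = 1.0 / math.sqrt(ms + eps)
    return [v * inv * w for v, w in zip(x, weight)]
```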
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

</details>

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
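The release notes above describe the new checkpoint wrapper as defaulting `use_reentrant=False` unless `TIMM_REENTRANT_CKPT=1` is set in the environment. A hedged sketch of that gate (the env-var name comes from the notes; the helper name and exact string parsing here are illustrative assumptions, not timm's actual code):

```python
import os

def use_reentrant_checkpointing() -> bool:
    # Per the v1.0.13 notes: use_reentrant defaults to False unless
    # TIMM_REENTRANT_CKPT=1 is exported in the environment.
    return os.environ.get("TIMM_REENTRANT_CKPT", "0") == "1"
```

The resulting flag would then be passed through to `torch.utils.checkpoint.checkpoint(..., use_reentrant=...)` by the wrapper in `timm.models`.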