
chore(deps): update auto merged updates #887


Open · wants to merge 1 commit into main from renovate/auto-merged-updates

Conversation

@platform-engineering-bot (Collaborator) commented on Jun 30, 2025:

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| pytest-container (source) | patch | `==0.4.3` -> `==0.4.4` |
| timm | patch | `==1.0.15` -> `==1.0.16` |

Release Notes

dcermak/pytest_container (pytest-container)

v0.4.4

Compare Source

Breaking changes:

  • Remove intermediate dataclass runtime._OciRuntimeBase (gh#238: https://github.com/dcermak/pytest_container/pull/238)

  • pytest_container.runtime.PodmanRuntime and
    pytest_container.runtime.DockerRuntime now raise exceptions if
    the runtime is not functional (gh#238: https://github.com/dcermak/pytest_container/pull/238); a minimal handling sketch follows below
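
A minimal sketch of adapting to this breaking change, assuming the failure surfaces when the runtime object is constructed; the release notes do not name the exception type, so the broad except clause below is a placeholder:

```python
# Minimal sketch for pytest_container >= 0.4.4: constructing PodmanRuntime or
# DockerRuntime can now raise when the runtime is not functional. The exact
# exception type is not named in the notes, so `except Exception` is a
# placeholder assumption.
from pytest_container.runtime import DockerRuntime, PodmanRuntime


def first_working_runtime():
    for runtime_cls in (PodmanRuntime, DockerRuntime):
        try:
            return runtime_cls()  # raises if this runtime is not functional
        except Exception as err:
            print(f"{runtime_cls.__name__} unavailable: {err}")
    raise RuntimeError("no functional container runtime found")
```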

Internal changes:

  • Drop poetry as the build system and fall back to setuptools

huggingface/pytorch-image-models (timm)

v1.0.16

Compare Source

June 26, 2025

  • MobileNetV5 backbone (w/ encoder only variant) for Gemma 3n image encoder
  • Version 1.0.16 released

June 23, 2025

  • Add F.grid_sample based 2D and factorized pos embed resize to NaFlexViT; faster when many different sizes are used (based on an example by https://github.com/stas-sl). A sketch follows this list.
  • Further speed up patch embed resampling by replacing vmap with matmul (based on a snippet by https://github.com/stas-sl).
  • Add 3 initial native-aspect NaFlexViT checkpoints created while testing, trained on ImageNet-1k with 3 different pos embed configs and otherwise identical hparams:
| Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len |
|---|---|---|---|---|
| naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 |
| naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 |
| naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 |
  • Support gradient checkpointing for forward_intermediates and fix some checkpointing bugs (thanks to https://github.com/brianhou0208)
  • Add 'corrected weight decay' (https://arxiv.org/abs/2506.02285) as option to AdamW (legacy), Adopt, Kron, Adafactor (BV), Lamb, LaProp, Lion, NadamW, RmsPropTF, SGDW optimizers
  • Switch PE (perception encoder) ViT models to use native timm weights instead of remapping on the fly
  • Fix cuda stream bug in prefetch loader
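
As referenced above, here is a hand-rolled sketch of a grid_sample based 2D pos embed resize. This is illustrative only, not timm's actual NaFlexViT implementation; the function name, shapes, and align_corners choice are assumptions:

```python
import torch
import torch.nn.functional as F


def resize_pos_embed_2d(pos_embed: torch.Tensor, old_hw, new_hw) -> torch.Tensor:
    """Resize a learned 2D pos embed (1, old_h*old_w, dim) to (1, new_h*new_w, dim)."""
    old_h, old_w = old_hw
    new_h, new_w = new_hw
    dim = pos_embed.shape[-1]
    # (1, N, dim) -> (1, dim, H, W) so grid_sample can treat it as an image.
    grid = pos_embed.reshape(1, old_h, old_w, dim).permute(0, 3, 1, 2)
    # Normalized target coordinates in [-1, 1], one (x, y) pair per output token.
    ys = torch.linspace(-1.0, 1.0, new_h)
    xs = torch.linspace(-1.0, 1.0, new_w)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([xx, yy], dim=-1).unsqueeze(0)  # (1, new_h, new_w, 2)
    out = F.grid_sample(grid, coords, mode="bilinear", align_corners=True)
    return out.permute(0, 2, 3, 1).reshape(1, new_h * new_w, dim)


# Example: resize a 14x14 grid (224px image, patch 16) to 24x24.
pe = torch.randn(1, 14 * 14, 768)
print(resize_pos_embed_2d(pe, (14, 14), (24, 24)).shape)  # torch.Size([1, 576, 768])
```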

June 5, 2025

  • Initial NaFlexVit model code. NaFlexVit is a Vision Transformer with:
    1. Encapsulated embedding and position encoding in a single module
    2. Support for nn.Linear patch embedding on pre-patchified (dictionary) inputs
    3. Support for NaFlex variable aspect, variable resolution (SigLip-2: https://arxiv.org/abs/2502.14786)
    4. Support for FlexiViT variable patch size (https://arxiv.org/abs/2212.08013)
    5. Support for NaViT fractional/factorized position embedding (https://arxiv.org/abs/2307.06304)
  • Existing ViT models in vision_transformer.py can be loaded into the NaFlexVit model by adding the use_naflex=True flag to create_model (a sketch follows this list)
    • Some native weights coming soon
  • A full NaFlex data pipeline is available that allows training / fine-tuning / evaluating with variable aspect / size images
    • To enable it in train.py and validate.py, add the --naflex-loader arg; it must be used with a NaFlexVit model
  • To evaluate an existing (classic) ViT loaded in NaFlexVit model w/ NaFlex data pipe:
    • python validate.py /imagenet --amp -j 8 --model vit_base_patch16_224 --model-kwargs use_naflex=True --naflex-loader --naflex-max-seq-len 256
  • Training supports some extra arguments worth noting:
    • The --naflex-train-seq-lens argument specifies which sequence lengths to randomly pick from per batch during training
    • The --naflex-max-seq-len argument sets the target sequence length for validation
    • Adding --model-kwargs enable_patch_interpolator=True --naflex-patch-sizes 12 16 24 will enable random patch size selection per batch with interpolation
    • The --naflex-loss-scale arg changes the loss scaling mode per batch relative to the batch size; the timm NaFlex loader changes the batch size for each sequence length
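
A minimal sketch of the use_naflex path described in the first bullet above; the model name comes from the example command, while pretrained=False and the printed output shape are assumptions made to keep the example self-contained:

```python
import timm
import torch

# Load a classic ViT into the NaFlexVit implementation via the use_naflex
# flag from these notes. pretrained=False and the 1000-class ImageNet head
# are assumptions for a self-contained example.
model = timm.create_model("vit_base_patch16_224", pretrained=False, use_naflex=True)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # expected: torch.Size([1, 1000])
```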

Full Changelog: huggingface/pytorch-image-models@v1.0.15...v1.0.16


Configuration

📅 Schedule: Branch creation - Between 12:00 AM and 03:59 AM, only on Monday ( * 0-3 * * 1 ) (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

Signed-off-by: Platform Engineering Bot <[email protected]>
@platform-engineering-bot force-pushed the renovate/auto-merged-updates branch from 0b0b38c to 648e65a on June 30, 2025 at 11:44
@platform-engineering-bot changed the title from "chore(deps): update dependency timm to v1.0.16" to "chore(deps): update auto merged updates" on Jun 30, 2025
@jeffmaury (Collaborator) left a comment:

LGTM
