
Update dependency torchrl to v0.6.0 #725

Closed: wants to merge 1 commit

Conversation

renovate[bot] (Contributor) commented on Oct 23, 2024

This PR contains the following updates:

Package: torchrl
Change: ==0.5.0 -> ==0.6.0

Release Notes

pytorch/rl (torchrl)

v0.6.0: compiled losses and partial steps

Compare Source

What's Changed

We introduce wrappers for ML-Agents and OpenSpiel. See the documentation for the OpenSpiel and ML-Agents wrappers for details.

We introduce support for partial steps (#2377, #2381), allowing you to run rollouts that end only when all envs are done, without resetting those that have already reached a termination point.
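The idea behind partial steps can be shown with a minimal plain-Python sketch. This is not the TorchRL API; `ToyEnv` and `rollout_until_all_done` are hypothetical names used only to illustrate the semantics: envs that have terminated are frozen rather than reset, and the rollout ends once every env is done.

```python
class ToyEnv:
    """Toy stand-in for a real env: terminates after `horizon` steps."""

    def __init__(self, horizon):
        self.horizon = horizon
        self.t = 0
        self.done = False

    def step(self):
        if not self.done:
            self.t += 1
            self.done = self.t >= self.horizon
        return self.t, self.done


def rollout_until_all_done(envs):
    """Step only the envs that are still running; finished envs stay frozen."""
    trajectories = [[] for _ in envs]
    while not all(env.done for env in envs):
        for i, env in enumerate(envs):
            if env.done:
                continue  # partial step: do not reset or advance finished envs
            t, _ = env.step()
            trajectories[i].append(t)
    return trajectories


envs = [ToyEnv(2), ToyEnv(4)]
trajs = rollout_until_all_done(envs)
# trajs -> [[1, 2], [1, 2, 3, 4]]: the short env stops early, unreset
```

The short env contributes a two-step trajectory while the longer one keeps going, which is exactly the behavior the release note describes.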

We add the capability of passing replay buffers directly to data collectors, avoiding synchronized inter-process communication and thereby drastically speeding up data collection. See the collector documentation for more info.
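A minimal plain-Python sketch of the pattern, not the TorchRL API (`ReplayBuffer` and `Collector` here are hypothetical stand-ins): when a buffer is handed to the collector, each batch is written into it in place, so nothing needs to be shipped back to the training loop.

```python
from collections import deque


class ReplayBuffer:
    """Toy buffer: a bounded deque standing in for a real replay buffer."""

    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)

    def extend(self, batch):
        self.storage.extend(batch)

    def __len__(self):
        return len(self.storage)


class Collector:
    """Yields fake transition batches; writes them to a buffer if one is given."""

    def __init__(self, n_batches, batch_size, replay_buffer=None):
        self.n_batches = n_batches
        self.batch_size = batch_size
        self.replay_buffer = replay_buffer

    def __iter__(self):
        for i in range(self.n_batches):
            batch = [{"step": i, "idx": j} for j in range(self.batch_size)]
            if self.replay_buffer is not None:
                self.replay_buffer.extend(batch)  # written in place
                yield None  # nothing to transfer back to the caller
            else:
                yield batch  # caller must copy the batch into its own buffer


buffer = ReplayBuffer(capacity=100)
for _ in Collector(n_batches=3, batch_size=4, replay_buffer=buffer):
    pass
# len(buffer) -> 12: all transitions landed in the buffer directly
```

With real multiprocessed collectors the saving comes from skipping the copy of each batch across process boundaries; the toy version only shows the control flow.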

The GAIL algorithm has also been integrated into the library (#2273).

We ensure that all loss modules are compatible with torch.compile without graph breaks (for a typical build). Compiled losses typically execute around 2x faster than their eager counterparts.

Finally, we have sadly decided not to support Gymnasium v1.0 and future releases as the new autoreset API is fundamentally incompatible with TorchRL. Furthermore, it does not guarantee the same level of reproducibility as previous releases. See this discussion for more information.
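To make the incompatibility concrete, here is a plain-Python sketch of the "next-step" autoreset scheme that Gymnasium v1.0 adopted (the class name is hypothetical; this is an illustration, not Gymnasium code): on episode end the terminal observation is returned, and the reset observation only arrives on the following `step` call, which breaks collectors that expect to control resets themselves.

```python
class NextStepAutoresetEnv:
    """Toy env illustrating next-step autoreset: the call *after* termination
    performs the reset instead of taking a real transition."""

    def __init__(self, horizon):
        self.horizon = horizon
        self.t = 0
        self.needs_reset = False

    def step(self):
        if self.needs_reset:
            # this call resets rather than stepping
            self.t = 0
            self.needs_reset = False
            return self.t, False  # reset observation, not a transition
        self.t += 1
        terminated = self.t >= self.horizon
        self.needs_reset = terminated
        return self.t, terminated


env = NextStepAutoresetEnv(horizon=2)
trace = [env.step() for _ in range(5)]
# trace -> [(1, False), (2, True), (0, False), (1, False), (2, True)]
# the (0, False) entry is a reset observation interleaved into the stream
```

A framework that assumes every `step` returns a genuine transition has to special-case these interleaved reset observations, which is the kind of mismatch the note refers to.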

We provide wheels for aarch64 machines; since we cannot upload them to PyPI, they are attached to these release notes.

Deprecations
New environments
New features
New Algorithms
Fixes
Performance
Documentation
Not user facing

New Contributors

As always, we want to show how appreciative we are of the vibrant open-source community that keeps TorchRL alive.

Full Changelog: pytorch/rl@v0.5.0...v0.6.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

MaximilienLC deleted the renovate/torchrl-0.x branch on October 23, 2024 at 15:07

renovate bot commented Oct 23, 2024

Renovate Ignore Notification

Because you closed this PR without merging, Renovate will ignore this update (==0.6.0). You will get a PR once a newer version is released. To ignore this dependency forever, add it to the ignoreDeps array of your Renovate config.

If you accidentally closed this PR, or if you changed your mind: rename this PR to get a fresh replacement PR.
