Update dependency torchrl to v0.6.0 #725
Closed
This PR contains the following updates:

| Package | Change |
| --- | --- |
| torchrl | `==0.5.0` -> `==0.6.0` |
Release Notes
pytorch/rl (torchrl)
v0.6.0: compiled losses and partial steps (Compare Source)
What's Changed
We introduce wrappers for ML-Agents and OpenSpiel. See the documentation for OpenSpiel and for ML-Agents.
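As a rough illustration of the OpenSpiel side (a minimal sketch: the game string and constructor usage below are assumptions based on the class names in these notes, not the documented example):

```python
# Minimal sketch, assuming OpenSpielEnv is exported from torchrl.envs and
# accepts an OpenSpiel game string; requires the `open_spiel` package.
from torchrl.envs import OpenSpielEnv

env = OpenSpielEnv("tic_tac_toe")  # hypothetical game choice
td = env.reset()                   # TensorDict with initial observations and action masks
print(td)
```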
We introduce support for partial steps (#2377, #2381), allowing you to run rollouts that end only when all envs are done, without resetting those that have already reached a termination point.
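A minimal sketch of what this enables, assuming `break_when_all_done` is the rollout flag exposed for partial steps (the flag name is our assumption):

```python
# Sketch: roll out a batch of envs until *all* of them are done. Sub-envs that
# terminate early stop being stepped but are not reset mid-rollout.
from torchrl.envs import GymEnv, ParallelEnv

env = ParallelEnv(4, lambda: GymEnv("CartPole-v1"))
rollout = env.rollout(
    max_steps=500,
    break_when_any_done=False,  # do not stop at the first termination
    break_when_all_done=True,   # assumed flag: stop once every sub-env is done
)
print(rollout.shape)  # roughly [4, T], with T the longest episode length
```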
Replay buffers can now be passed directly to data collectors, avoiding synchronized inter-process communication and thereby drastically speeding up data collection. See the collector documentation for more info.
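A hedged sketch of the pattern, assuming the collector accepts a `replay_buffer` keyword as described above (the keyword name is our reading of the feature, not a verified signature):

```python
# Sketch: the collector writes straight into the buffer, so iterating the
# collector extends the buffer instead of yielding batches over pipes.
from torchrl.collectors import SyncDataCollector
from torchrl.data import LazyTensorStorage, ReplayBuffer
from torchrl.envs import GymEnv

rb = ReplayBuffer(storage=LazyTensorStorage(100_000))
collector = SyncDataCollector(
    GymEnv("CartPole-v1"),
    policy=None,              # None -> random actions, for illustration
    frames_per_batch=64,
    total_frames=1_024,
    replay_buffer=rb,         # assumed keyword: buffer is filled in place
)
for _ in collector:           # each iteration extends `rb` rather than yielding data
    if len(rb) >= 128:
        batch = rb.sample(32) # train on sampled transitions here
collector.shutdown()
```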
The GAIL algorithm has also been integrated into the library (#2273).
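A hedged construction sketch; the discriminator architecture, the output key, and the `GAILLoss` import path and signature below are assumptions for illustration, not the library's canonical example:

```python
# Sketch: build a discriminator over (observation, action) pairs and wrap it
# in the new GAIL objective. Dimensions (4-dim obs, 2-dim action) are toy values.
import torch
import torch.nn as nn
from tensordict.nn import TensorDictModule
from torchrl.objectives import GAILLoss  # assumed export path


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 + 2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()
        )

    def forward(self, obs, act):
        # Score a state-action pair as expert-like (1) or policy-like (0).
        return self.net(torch.cat([obs, act], dim=-1))


discriminator = TensorDictModule(
    Discriminator(), in_keys=["observation", "action"], out_keys=["d_logits"]
)
loss_module = GAILLoss(discriminator)  # assumed signature
```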
We ensure that all loss modules are compatible with torch.compile without graph breaks (for a typical build). Execution of compiled losses is usually around 2x faster than their eager counterparts.
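A minimal sketch of compiling a loss module, using DDPG with toy networks (the network shapes and the fake batch are ours, chosen for illustration):

```python
# Sketch: TorchRL loss modules are plain nn.Modules, so torch.compile wraps
# them directly; the compiled module keeps the eager call signature.
import torch
import torch.nn as nn
from tensordict import TensorDict
from tensordict.nn import TensorDictModule
from torchrl.modules import ValueOperator
from torchrl.objectives import DDPGLoss

actor = TensorDictModule(nn.Linear(4, 2), in_keys=["observation"], out_keys=["action"])


class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(4 + 2, 1)

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


value = ValueOperator(QNet(), in_keys=["observation", "action"])
loss_module = DDPGLoss(actor, value)
compiled_loss = torch.compile(loss_module)

# A fake transition batch with the keys the loss expects.
batch = TensorDict(
    {
        "observation": torch.randn(8, 4),
        "action": torch.randn(8, 2),
        ("next", "observation"): torch.randn(8, 4),
        ("next", "reward"): torch.randn(8, 1),
        ("next", "done"): torch.zeros(8, 1, dtype=torch.bool),
        ("next", "terminated"): torch.zeros(8, 1, dtype=torch.bool),
    },
    batch_size=[8],
)
loss_vals = compiled_loss(batch)  # TensorDict with "loss_actor" and "loss_value"
```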
Finally, we have sadly decided not to support Gymnasium v1.0 and future releases, as the new autoreset API is fundamentally incompatible with TorchRL and does not guarantee the same level of reproducibility as previous releases. See this discussion for more information.
We provide wheels for aarch64 machines, but since we cannot upload them to PyPI, they are attached to these release notes.
Deprecations

New environments
- `OpenSpielWrapper` and `OpenSpielEnv` (#2345) by @kurtamohler

New features
- `group_map` support for the ML-Agents wrappers (#2491) by @kurtamohler
- `hold_out_net` (#2499) by @vmoens

New Algorithms
- GAIL (#2273)

Fixes
- `TD_GET_DEFAULTS_TO_NONE=1` in all CIs (#2363) by @vmoens
- `MultiCategorical` support in PettingZoo action masks (#2485) by @matteobettini (co-authored by Vincent Moens)
- `reshape(-1)` for inputs to `DreamerActorLoss` (#2496) by @kurtamohler
- `reshape(-1)` for inputs to `objectives` modules (#2494) by @kurtamohler (co-authored by Vincent Moens)

Performance
- `CatFrames.unfolding` with `padding="same"` (#2407) by @kurtamohler
- `PrioritizedSliceSampler._padded_indices` (#2433) by @kurtamohler
- `SliceSampler._tensor_slices_from_startend` (#2423) by @kurtamohler

Documentation
- `knowledge_base` entry (#2383) by @depictiger

Not user facing
- `advantages.py` (#2492) by @louisfaury

New Contributors
As always, we want to show how appreciative we are of the vibrant open-source community that keeps TorchRL alive.
Full Changelog: pytorch/rl@v0.5.0...v0.6.0
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.