Release v1.2.0 Notes

Highlights

Ray Autoscaler

🎉 New Features:

  • A new autoscaler output format in monitor.log (#12772, #13561)
  • Piping autoscaler events to driver logs (#13434)

💫 Enhancements:

🔨 Fixes:

RLlib

🎉 New Features:

  • Fast Attention Nets (using the trajectory view API) (#12753).
  • Attention Nets: Full PyTorch support (#12029).
  • Attention Nets: Support auto-wrapping around default or custom models by specifying “use_attention=True” in the model’s config. This now works completely analogously to “use_lstm=True” (see the config sketch after this list). (#11698)
  • New Offline RL Algorithm: CQL (based on SAC) (#13118).
  • MAML: Discrete actions support (added CartPole mass test case).
  • Support Atari framestacking via the trajectory view API (#13315).
  • Support for D4RL environments/benchmarks (#13550).
  • Preliminary work on JAX support (#13077, #13091).
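
As a hedged illustration of the “use_attention=True” auto-wrapping above, here is a minimal config sketch; the environment, trainer, and other settings are placeholders chosen for illustration, not values from this release.

```python
from ray.rllib.agents.ppo import PPOTrainer

# Minimal sketch: auto-wrap the default model in an attention net,
# analogous to setting "use_lstm": True. Environment and hyperparameters
# below are placeholders.
config = {
    "env": "CartPole-v0",
    "framework": "torch",       # attention nets now have full PyTorch support
    "num_workers": 0,
    "model": {
        "use_attention": True,  # wrap the default (or custom) model in an attention net
    },
}

trainer = PPOTrainer(config=config)
print(trainer.train()["episode_reward_mean"])
```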

💫 Enhancements:

  • Rollout lengths: Allow the unit to be configured as “agent_steps” in multi-agent settings (default: “env_steps”) (#12420); see the sketch after this list.
  • TFModelV2: Soft-deprecate register_variables and unify variable names w.r.t. TorchModelV2 (#13339, #13363).
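
A minimal sketch of the “agent_steps” rollout-length unit mentioned above; the exact config key shown ("count_steps_by" under "multiagent") is an assumption, so verify it against the RLlib docs for your version.

```python
# Sketch only: count rollout fragments in agent steps instead of env steps
# in a multi-agent setup. The "count_steps_by" key name is an assumption.
config = {
    "rollout_fragment_length": 200,
    "multiagent": {
        "count_steps_by": "agent_steps",  # default: "env_steps"
    },
}
```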

📖 Documentation:

  • Added documentation on the Model building API (#13260, #13261).
  • Added documentation for the trajectory view API (#12718).
  • Added documentation for SlateQ (#13266).
  • README.md documentation for almost all algorithms in rllib/agents (#12943, #13035).
  • Type annotations for the “rllib/execution” folder (#12760, #13036).

🔨 Fixes:

Tune

🎉 New Features:

💫 Enhancements:

  • Ray Tune now uses ray.cloudpickle under the hood, allowing you to checkpoint large models (>4GB) (#12958).
  • Using the 'reuse_actors' flag can now speed up training for general Trainable API usage. (#13549)
  • Ray Tune will now automatically buffer results from trainables, allowing you to use an arbitrary reporting frequency on your training functions. (#13236)
  • Ray Tune now has a variety of experiment stoppers (#12750)
  • Ray Tune now supports an integer loguniform search space distribution (#12994); see the sketch after this list.
  • Ray Tune now has initial support for the Ray placement group API (#13370).
  • The Weights & Biases integration (WandbLogger) now also accepts wandb.data_types.Video (#13169)
  • The Hyperopt integration (HyperoptSearch) can now directly accept categorical variables instead of indices (#12715)
  • Ray Tune now supports experiment checkpointing when using grid search (#13357)
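
A hedged sketch combining two of the enhancements above: the integer loguniform search space (assumed to be exposed as tune.lograndint) and the reuse_actors flag. The training function, metric, and bounds are placeholders.

```python
from ray import tune

def train_fn(config):
    # Placeholder training loop that reports a dummy metric.
    for step in range(5):
        tune.report(loss=1.0 / (config["hidden_units"] * (step + 1)))

analysis = tune.run(
    train_fn,
    config={
        # Integer loguniform search space (assumed API: tune.lograndint).
        "hidden_units": tune.lograndint(16, 512),
    },
    num_samples=4,
    reuse_actors=True,  # reuse actor processes between trials to speed things up
)
print(analysis.get_best_config(metric="loss", mode="min"))
```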

🔨 Fixes and Updates:

  • The Optuna integration was updated to support the 2.4.0 API while maintaining backwards compatibility (#13631)
  • All search algorithms now support points_to_evaluate (#12790, #12916); see the sketch after this list.
  • PBT Transformers example was updated and improved (#13174, #13131)
  • The scikit-optimize integration was improved (#12970)
  • Various bug fixes (#13423, #12785, #13171, #12877, #13255, #13355)
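
As a sketch of the points_to_evaluate support mentioned above, here is how it might look with the Hyperopt integration; the search space and seed points are placeholders, and the exact constructor signature should be checked against the Tune docs.

```python
from hyperopt import hp
from ray.tune.suggest.hyperopt import HyperOptSearch

# Placeholder Hyperopt space (hp.loguniform bounds are in log space).
space = {"lr": hp.loguniform("lr", -7, -1)}

# Seed the search with known-good configurations before sampling new ones.
search_alg = HyperOptSearch(
    space,
    metric="loss",
    mode="min",
    points_to_evaluate=[{"lr": 0.01}, {"lr": 0.001}],
)
```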

SGD

🔨 Fixes and Updates:

  • Fix docstring for as_trainable (#13173)
  • Fix process group timeout units (#12477)
  • Disable Elastic Training by default when used with Tune (#12927)

Serve

🎉 New Features:

  • Ray Serve backends now accept a Starlette request object instead of a Flask request object (#12852). This is a breaking change, so please read the migration guide.
  • Ray Serve backends now have the option of returning a Starlette Response object (#12811, #13328). This allows for more customizable responses, including responses with custom status codes (see the sketch after this list).
  • [Experimental] The new Ray Serve MLflow plugin makes it easy to deploy your MLflow models on Ray Serve. It comes with a Python API and a command-line interface.
  • Using “ImportedBackend”, you can now specify a backend based on a class that is installed in the Python environment the workers will run in, even if it isn’t installed in the driver script’s environment (the one making the Serve API calls) (#12923).
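
A minimal sketch of a backend under the new Starlette interface, returning a Starlette Response with an explicit status code; the backend function is a placeholder and the usual deployment calls are omitted.

```python
from starlette.responses import Response

# Sketch: a Serve backend now receives a starlette.requests.Request and
# may return a starlette Response for full control over the HTTP reply.
def echo(request):
    name = request.query_params.get("name", "world")
    return Response(
        f"hello, {name}",
        status_code=200,            # custom status codes are now possible
        media_type="text/plain",
    )
```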

💫 Enhancements:

  • Dependency management using conda no longer requires the driver script to be running in an activated conda environment (#13269).
  • A Ray ObjectRef can now be used as an argument to serve_handle.remote(...) (#12592); see the sketch after this list.
  • Backends are now shut down gracefully. You can set the graceful timeout in BackendConfig. (#13028)
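
A hedged sketch of passing an ObjectRef straight to a Serve handle; the endpoint name is a placeholder, and the client/handle lookup calls reflect the Serve API as I understand it at this release, so treat them as assumptions.

```python
import ray
from ray import serve

ray.init()
client = serve.start()
# ... create_backend / create_endpoint for "my_endpoint" omitted ...

handle = client.get_handle("my_endpoint")  # assumed handle lookup
features_ref = ray.put([1.0, 2.0, 3.0])    # an ObjectRef
# The ObjectRef itself is a valid argument; Serve resolves it before the call.
result = ray.get(handle.remote(features_ref))
```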

📖 Documentation:

  • A tutorial page has been added for integrating Ray Serve with your existing FastAPI or AIOHTTP web server (#13127).
  • Documentation has been added for Ray Serve metrics (#13096).