
ray-1.0.0

@barakmich barakmich released this 30 Sep 15:15

Ray 1.0

We're happy to announce the release of Ray 1.0, an important step towards the goal of providing a universal API for distributed computing.

To learn more about Ray 1.0, check out our blog post and whitepaper.

Ray Core

Autoscaler

Dashboard & Metrics

RLlib

  • Two model-based RL algorithms were added: MB-MPO (“Model-Based Meta-Policy Optimization”) and “Dreamer”. Both algorithms were benchmarked and perform comparably to the results reported in their respective papers.
  • A “Curiosity” (intrinsic motivation) module was added via RLlib’s Exploration API and benchmarked on a sparse-reward Unity3D environment (Pyramids).
  • Added documentation for the Distributed Execution API.
  • Removed (already soft-deprecated) APIs: the Model(V1) class, some Trainer config keys, and some methods/functions. Using these previously produced a warning; they now raise an error.
  • Added DeepMind Control Suite examples.
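
As a minimal sketch of the Exploration API pattern described above, an intrinsic-motivation module is enabled through the trainer config's `exploration_config` dict, whose `"type"` names the exploration class. The environment name is hypothetical, and the exact option keys should be checked against the RLlib docs for your Ray version.

```python
# Sketch: enabling the "Curiosity" intrinsic-motivation module via
# RLlib's Exploration API. Only the structure is illustrated here;
# module-specific options (feature net size, learning rate, etc.)
# would go inside the inner dict.
config = {
    "env": "my_sparse_reward_env",   # hypothetical environment name
    "exploration_config": {
        "type": "Curiosity",         # module name as given in these notes
    },
}
```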

Tune

Breaking changes:

  • Multiple tune.run parameters have been deprecated: ray_auto_init, run_errored_only, global_checkpoint_period, with_server (#10518)
  • The tune.run parameters upload_dir, sync_to_cloud, sync_to_driver, and sync_on_checkpoint have been moved to tune.SyncConfig [docs] (#10518)
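
The migration can be sketched with a plain-Python stand-in that mirrors the four moved field names; this is an illustration of the pattern, not Ray's actual `tune.SyncConfig` class, so check your Ray version for the exact signature.

```python
# Stand-in mirroring the tune.SyncConfig fields named in these notes.
from dataclasses import dataclass
from typing import Callable, Optional, Union

@dataclass
class SyncConfig:
    upload_dir: Optional[str] = None
    sync_to_cloud: Union[None, bool, str, Callable] = None
    sync_to_driver: Union[None, bool, str, Callable] = None
    sync_on_checkpoint: bool = True

# Before: tune.run(trainable, upload_dir="s3://bucket/exp", ...)
# After:  tune.run(trainable, sync_config=tune.SyncConfig(upload_dir="s3://bucket/exp"))
cfg = SyncConfig(upload_dir="s3://bucket/exp")
```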

New APIs:

  • mode, metric, time_budget parameters for tune.run (#10627, #10642)
  • Search Algorithms now share a uniform API (#10621, #10444). You can also use the new create_scheduler/create_searcher shim layer to create search algorithms/schedulers via string, reducing boilerplate code (#10456).
  • Native callbacks for MXNet, Horovod, Keras, XGBoost, and PyTorch Lightning (#10533, #10304, #10509, #10502, #10220)
  • PBT runs can be replayed with PopulationBasedTrainingReplay scheduler (#9953)
  • Search Algorithms are saved/resumed automatically (#9972)
  • New Optuna Search Algorithm docs (#10044)
  • Tune now can sync checkpoints across Kubernetes pods (#10097)
  • Failed trials can be rerun with tune.run(resume="run_errored_only") (#10060)
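
The create_scheduler/create_searcher shim layer amounts to looking up a class by its string name in a registry and constructing it with the given keyword arguments. The sketch below illustrates that pattern in plain Python; it is not Ray's actual implementation, and the registered name is only an example.

```python
# Minimal registry-based shim: create a scheduler from a string name,
# forwarding keyword arguments to the registered class.
SCHEDULER_REGISTRY = {}

def register_scheduler(name):
    def decorator(cls):
        SCHEDULER_REGISTRY[name] = cls
        return cls
    return decorator

def create_scheduler(name, **kwargs):
    """Look up a scheduler class by string and instantiate it."""
    return SCHEDULER_REGISTRY[name](**kwargs)

@register_scheduler("async_hyperband")
class AsyncHyperBand:
    def __init__(self, metric=None, mode=None):
        self.metric = metric
        self.mode = mode

sched = create_scheduler("async_hyperband", metric="loss", mode="min")
```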

Other Changes:

RaySGD:

  • Creator functions are subsumed by the TrainingOperator API (#10321)
  • Training happens on actors by default (#10539)
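
The shift from creator functions to the TrainingOperator API can be sketched as follows: instead of passing separate model_creator/optimizer_creator/data_creator functions to the trainer, all setup lives in one operator class. This is a plain-Python illustration of the pattern, not RaySGD's actual classes; in real RaySGD the subclass would build and register the model, optimizer, and data loaders in setup().

```python
# Sketch of the TrainingOperator pattern: a single class whose setup()
# replaces the separate creator functions.
class TrainingOperator:
    def __init__(self, config):
        self.config = config
        self.setup(config)

    def setup(self, config):
        raise NotImplementedError

class MyOperator(TrainingOperator):
    def setup(self, config):
        # In real RaySGD this would construct the model, optimizer, and
        # data loaders; here a dict stands in for the model.
        self.model = {"lr": config["lr"]}

op = MyOperator({"lr": 0.01})
```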

Serve

Thanks

We thank all the contributors for their contributions to this release!

@MissiontoMars, @ijrsvt, @desktable, @kfstorm, @lixin-wei, @Yard1, @chaokunyang, @justinkterry, @pxc, @ericl, @WangTaoTheTonic, @carlos-aguayo, @sven1977, @gabrieleoliaro, @alanwguo, @aryairani, @kishansagathiya, @barakmich, @rkube, @SongGuyang, @qicosmos, @ffbin, @PidgeyBE, @sumanthratna, @yushan111, @juliusfrost, @edoakes, @mehrdadn, @Basasuya, @icaropires, @michaelzhiluo, @fyrestone, @robertnishihara, @yncxcw, @oliverhu, @yiranwang52, @ChuaCheowHuan, @raphaelavalos, @suquark, @krfricke, @pcmoritz, @stephanie-wang, @hekaisheng, @zhijunfu, @Vysybyl, @wuisawesome, @sanderland, @richardliaw, @simon-mo, @janblumenkamp, @zhuohan123, @AmeerHajAli, @iamhatesz, @mfitton, @noahshpak, @maximsmol, @weepingwillowben, @raulchen, @09wakharet, @ashione, @henktillman, @architkulkarni, @rkooo567, @zhe-thoughts, @amogkam, @kisuke95, @clarkzinzow, @holli, @raoul-khour-ts