Releases: ray-project/xgboost_ray
xgboost_ray-0.0.5
- Added distributed callbacks invoked before/after training and data loading (#71)
- Improved fault tolerance testing and benchmarking (#72)
- Placement group fixes (#74)
- Improved warnings/errors when using incompatible APIs (#76, #82, #84)
- Enhanced compatibility with XGBoost 0.90 (legacy) and XGBoost 1.4 (#85, #90)
- Better testing (#72, #87)
- Minor bug/API fixes (#78, #83, #89)
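The distributed callbacks added in #71 follow a before/after hook pattern around data loading and training on each actor. A minimal plain-Python sketch of that pattern (class and method names here are illustrative, not the actual xgboost_ray API):

```python
# Hypothetical sketch of before/after callback hooks fired around
# data loading and training on each actor (names are illustrative,
# not the actual xgboost_ray API).

class DistributedCallback:
    """Base class: subclasses override any subset of the hooks."""
    def before_data_loading(self, actor): pass
    def after_data_loading(self, actor): pass
    def before_train(self, actor): pass
    def after_train(self, actor): pass

class LoggingCallback(DistributedCallback):
    """Records every hook invocation for later inspection."""
    def __init__(self):
        self.events = []
    def before_data_loading(self, actor):
        self.events.append(f"load:start:{actor}")
    def after_data_loading(self, actor):
        self.events.append(f"load:end:{actor}")
    def before_train(self, actor):
        self.events.append(f"train:start:{actor}")
    def after_train(self, actor):
        self.events.append(f"train:end:{actor}")

def run_actor(actor_id, callbacks):
    # Each (simulated) training actor fires the hooks around its work.
    for cb in callbacks:
        cb.before_data_loading(actor_id)
    # ... load this actor's data shard ...
    for cb in callbacks:
        cb.after_data_loading(actor_id)
    for cb in callbacks:
        cb.before_train(actor_id)
    # ... train the local booster ...
    for cb in callbacks:
        cb.after_train(actor_id)

cb = LoggingCallback()
for actor in range(2):
    run_actor(actor, [cb])
```

After running, `cb.events` holds one load-start/load-end/train-start/train-end quadruple per actor, which is the observability these hooks provide in a real distributed run.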
xgboost_ray-0.0.4
- Add GCS support (Petastorm) (#63)
- Enforce that labels are set for train/evaluation data (#64)
- Refactor the data loading structure to make it easier to add or change data loading backends (#66)
- Distributed and locality-aware data loading for Modin dataframes (#67)
- Documentation cleanup (#68)
- Fix RayDeviceQuantileDMatrix usage (#69)
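The refactor in #66 makes data loading backends pluggable. A minimal sketch of one way such a registry can look in plain Python (hypothetical names, not the library's actual internals):

```python
# Hypothetical sketch of a pluggable data-loading backend registry
# (illustrative only; not xgboost_ray's actual internals).

DATA_BACKENDS = {}

def register_backend(name):
    """Class decorator that registers a loader under a format name."""
    def deco(cls):
        DATA_BACKENDS[name] = cls
        return cls
    return deco

@register_backend("csv")
class CSVBackend:
    def load(self, path):
        return f"rows from {path}"

@register_backend("parquet")
class ParquetBackend:
    def load(self, path):
        return f"columns from {path}"

def load_data(source):
    # Dispatch on the file extension; supporting a new format
    # only requires registering one new backend class.
    name = source.rsplit(".", 1)[-1]
    backend = DATA_BACKENDS[name]()
    return backend.load(source)
```

With this shape, adding Modin or Petastorm support is a new registered class rather than a change to the core loading path.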
xgboost_ray-0.0.3
xgboost_ray-0.0.2
- Fix compatibility with Python 3.8
xgboost_ray-0.0.1
Initial version of XGBoost on Ray, featuring:
- Distributed training and prediction support, tested on clusters of up to 600 nodes
- Fault tolerance: Restart the whole run from the latest checkpoint if a node fails
- Fault tolerance: Automatically scale down/up when nodes die or become available again
- Data loading from various sources (CSV, Parquet, Modin dataframes, Ray MLDataset, pandas, numpy)
- Seamless integration with Ray Tune
- Initial Ray placement group support
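The checkpoint-based fault tolerance listed above amounts to a retry loop that resumes from the latest checkpoint instead of iteration zero. A deterministic plain-Python sketch of that idea (all names hypothetical, not the actual implementation):

```python
# Hypothetical sketch of restart-from-checkpoint fault tolerance
# (illustrative only; not xgboost_ray's actual implementation).

class FlakyTrainer:
    """Simulated distributed training job that crashes at preset iterations."""
    def __init__(self, num_iters, crash_at):
        self.num_iters = num_iters
        self.crash_at = set(crash_at)  # iterations where a node is "lost"
        self.checkpoint = -1           # last successfully completed iteration

    def run(self):
        # Resume from the latest checkpoint rather than iteration 0.
        for i in range(self.checkpoint + 1, self.num_iters):
            if i in self.crash_at:
                self.crash_at.remove(i)  # node comes back; don't crash twice
                raise RuntimeError(f"node lost at iteration {i}")
            self.checkpoint = i  # checkpoint after each iteration
        return self.checkpoint

def fault_tolerant_train(trainer, max_restarts=10):
    for _ in range(max_restarts + 1):
        try:
            return trainer.run()
        except RuntimeError:
            continue  # restart the whole run from the latest checkpoint
    raise RuntimeError("exceeded restart budget")

trainer = FlakyTrainer(num_iters=100, crash_at=[10, 50])
print(fault_tolerant_train(trainer))  # -> 99, despite two simulated node losses
```

Because progress is checkpointed per iteration, each restart only redoes work since the last checkpoint instead of the whole run.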