HierarchicalKV v0.1.0-beta.1
Pre-release
Release Notes
This is the first release of HierarchicalKV!
Features
- Supports training large RecSys models on HBM and host memory at the same time.
- Provides better performance by fully bypassing the CPU and reducing the communication workload.
- Implements table-size restraint strategies, based on LRU or customized policies, as CUDA kernels.
- Sustains a high load factor, close to 1.0, during normal operation.
Thank You to All Our Contributors
- Fan Yu
- EmmaQiao
- Gems Guo
- Haidong Rong
- LiFan
- Matthias Langer
- Michael McKiernan
- Ranjeet
- 王泽寰
- Zhangyafei
Acknowledgment
We are very grateful to the initial external contributors @Zhangyafei and @Lifan for their design, coding, and review work.