Release v0.2.3
We are thrilled to announce the release of GraphLearn for PyTorch v0.2.3. This update includes enhancements focused on:
- Distributed training support for vineyard (v6d) as part of the integration with GraphScope.
- Optimizations such as graph caching, plus experimental features including bf16 precision and all-to-all communication (see the sketches after this list).
- Some bug fixes.
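The graph caching work in #132 targets remote feature access in distributed training. As a rough illustration of the general idea (not GLT's actual API; the `FeatureCache` class and `remote_fetch` callable below are hypothetical), hot node features pulled from other partitions can be kept in a local LRU cache so repeated samples skip the network round trip:

```python
from collections import OrderedDict
import torch

class FeatureCache:
    """Hypothetical LRU cache for node features fetched from remote partitions."""

    def __init__(self, remote_fetch, capacity: int = 100_000):
        self._fetch = remote_fetch          # callable: node_id -> feature tensor
        self._capacity = capacity
        self._cache: OrderedDict[int, torch.Tensor] = OrderedDict()

    def get(self, node_id: int) -> torch.Tensor:
        if node_id in self._cache:
            self._cache.move_to_end(node_id)   # mark as most recently used
            return self._cache[node_id]
        feat = self._fetch(node_id)            # slow path: remote fetch (e.g. RPC)
        self._cache[node_id] = feat
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)    # evict least recently used entry
        return feat

# Usage with a dummy in-process "remote" store standing in for another partition.
store = {i: torch.randn(64) for i in range(10)}
cache = FeatureCache(remote_fetch=store.__getitem__, capacity=4)
_ = cache.get(1)
_ = cache.get(1)   # second access hits the local cache, no remote fetch
```

Because sampled node accesses in real graphs are heavily skewed toward a small set of high-degree nodes, even a modest cache can absorb a large share of remote reads.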
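For the bf16 support in #133, the standard PyTorch way to run a CPU forward pass in bfloat16 is `torch.autocast`; on recent Intel Xeon CPUs, oneDNN dispatches bf16 matmuls to the AMX units. A minimal sketch with a placeholder model standing in for a GNN (this is generic PyTorch usage, not GLT-specific API):

```python
import torch
import torch.nn as nn

# Placeholder two-layer model standing in for a GNN; any nn.Module works here.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 128)          # stand-in for a sampled mini-batch of features
y = torch.randint(0, 16, (32,))   # stand-in labels

# Run the forward pass in bfloat16 on CPU; on AMX-capable Intel CPUs,
# oneDNN routes the bf16 matmuls to the AMX tile units.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
    loss = criterion(out, y)

loss.backward()   # parameters stay fp32, so their gradients are fp32
optimizer.step()
```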
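The all-to-all change in #134 replaces per-peer point-to-point RPC with a single collective, using gloo as the backend for now. A minimal sketch of such an exchange with `torch.distributed` (the tensor contents are illustrative only, not GLT's actual messages):

```python
import torch
import torch.distributed as dist

def run() -> None:
    # Assumes the usual env:// variables (RANK, WORLD_SIZE, MASTER_ADDR,
    # MASTER_PORT) are set, e.g. by launching with torchrun.
    dist.init_process_group(backend="gloo")
    rank, world = dist.get_rank(), dist.get_world_size()

    # Each rank prepares one chunk per peer; chunk i is delivered to rank i.
    send = [torch.full((4,), float(rank * world + i)) for i in range(world)]
    recv = [torch.empty(4) for _ in range(world)]

    # One collective replaces world_size point-to-point round trips.
    dist.all_to_all(recv, send)
    print(f"rank {rank} received {[t[0].item() for t in recv]}")

    dist.destroy_process_group()

if __name__ == "__main__":
    run()
```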
What's Changed
- IGBH: Add Dockerfile and some minor fixes by @LiSu in #128
- IGBH: adjust mllogger tag position by @LiSu in #130
- Distributed support for v6d GraphScope by @husimplicity in #116
- add graph caching support for distributed training by @kaixuanliu in #132
- add bf16 precision support to utilize Intel's AMX accelerations by @kaixuanliu in #133
- fix: data path for the server/client case by @husimplicity in #135
- add all2all support to replace p2p rpc, using gloo as backend temporarily by @kaixuanliu in #134
- Fix table dataset init graph by @husimplicity in #136
- Verify hops as dict with 0 by @husimplicity in #138
- fix: several bugs in distributed mode by @husimplicity in #139
- [fix] explicitly call share_memory_ in Feature in cpu mode by @Zhanghyi in #142
- upgrade pytorch and cuda versions by @Zhanghyi in #141
Full Changelog: v0.2.2...v0.2.3