Release v0.2.0
Summary
The improvements included in this release (FederatedScope v0.2.0) are summarized as follows:
- FederatedScope allows users to apply asynchronous training strategies in federated learning via an event-driven architecture, including different aggregation conditions, staleness tolerance, broadcasting manners, etc. We also support efficient standalone simulation for cross-device FL with a large number of participants (see the illustrative configuration sketch after this list).
- We add three benchmarks for Federated HPO, Personalized FL, and Hetero-Task FL to promote the application of federated learning in a wide range of scenarios.
- We simplify installation, setup, and continuous integration (CI), making it easier for users to get started and customize FederatedScope. Useful visualization functionalities are also added so that users can monitor the training process and evaluation results.
- We add paper lists for related topics, including FL-Recommendation, Federated-HPO, Personalized FL, Federated Graph Learning, FL-NLP, FL-Attacker, FL-Incentive-Mechanism, and so on. These materials are updated continuously.
- Several novel features are also included in this release, such as performance attacks, the organizer, unseen-client generalization, splitters, and client samplers, which enhance FederatedScope's robustness and comprehensiveness.
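To give a concrete feel for the asynchronous training support mentioned in the first item above, the following sketch shows how such a strategy might be enabled through a YAML configuration. The key names and values here (e.g., `asyn.use`, `asyn.broadcast_manner`, `asyn.staleness_toleration`) are illustrative assumptions, not the authoritative schema; the example configs shipped with the repository are the definitive reference.

```yaml
# Illustrative sketch only -- the asyn.* key names below are assumptions.
use_gpu: True
federate:
  mode: standalone              # simulate all participants in a single process
  client_num: 1000              # large-scale cross-device setting
  total_round_num: 500
asyn:
  use: True                              # assumed switch to turn on asynchronous training
  broadcast_manner: after_aggregating    # assumed option for when the server broadcasts models
  staleness_toleration: 10               # assumed bound on how stale an accepted update may be
```

In standalone mode, all participants run in a single process, which is what makes simulating cross-device FL with many clients practical on one machine.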
Commits
Enhancements & Features
- Add backdoor attack @Alan-Qin (#267)
- Add organizer to FederatedScope @rayrayraykk (#265, #257)
- Monitor client-wise and global wandb info @yxdyc (#260, #226, #206, #176, #90)
- More friendly guidance of installation, setup and contribution @rayrayraykk (#255, #192)
- Add learning rate scheduler in FederatedScope @DavdGao (#248)
- Support different types of keys when communicating via grpc @xieyxclack (#239)
- Support constructing FL course when server does not have data @xieyxclack (#236)
- Enable the unseen-clients case to check the participation generalization gap @yxdyc (#238, #100)
- Support more robust type conversion in yaml file @yxdyc (#229)
- Asynchronous Federated Learning @xieyxclack (#225)
- Support both pre- and post-merging data for the "global" baseline @yxdyc (#220)
- Format the code with flake8 @rayrayraykk (#211, #207)
- Add paper list of FL-Attacker and FL-Incentive-Mechanism @Osier-Yi (#203, #202, #201)
- Add client samplers @xieyxclack (#200)
- Modify the log for hooks_in_train/test @DavdGao (#181)
- Modify the fine-tuning mechanism @DavdGao (#177)
- Add FedHPO-B, a benchmark suite for federated hyperparameter optimization @rayrayraykk @joneswong (#173, #146, #127)
- Add pFL-Bench, a comprehensive benchmark for personalized Federated Learning @yxdyc (#169, #149)
- Add B-FHTL, a benchmark suite for studying federated hetero-task learning @DavdGao (#167, #150)
- Update splitter for consistent label distribution @xieyxclack (#154)
- Improve SHA wrapper @joneswong (#145)
- Add slack & DingDing group @xieyxclack (#142)
- Add FedEx @joneswong @rayrayraykk (#141, #137, #120)
- Enable single thread HPO @joneswong (#140)
- Refactor autotune module @joneswong (#133)
- Add paper list of federated database @DavdGao (#129)
- Add a quadratic objective function-based experiment @joneswong (#111)
- Support optimizers with different parameters @DavdGao (#96)
- Demo how to use SMAC for FedHPO @joneswong (#88)
- FLIT for federated graph classification/regression @wanghh7 (#87)
- Add momentum for the optimizer in server @DavdGao (#86)
- Add an example for distributed mode @xieyxclack (#85)
- Add readme for vFL @xieyxclack (#83)
- Add paper list of FL-NLP @cheneydon (#81)
- Add more models and datasets from external packages @rayrayraykk (#79, #42)
- Add pFL paper list @yxdyc (#73, #72)
- Add paper list of FedRec @xieyxclack (#68)
- Add paper list of FedHPO @joneswong (#67)
- Add paper list of federated graph learning @rayrayraykk (#65)
Bug Fixes
- Fix ditto trainer @yxdyc (#271)
- Fix personalization when module has lazy load hooks @rayrayraykk (#269)
- Fix the wrong early_stopper.track_and_check call in client @yxdyc (#237)
- Fix type conversion error and invalid logging in distributed mode @rayrayraykk (#232, #223)
- Fix the cpu and memory wastage problems caused by multiprocess @yxdyc (#212)
- Fix invalid sample_client_num in some situations @yxdyc (#210)
- Fix the url of GFL dataset @rayrayraykk (#196)
- Fix twitter dataset @rayrayraykk (#187)
- Bug fixes for monitor and logger @rayrayraykk (#188, #175, #109)
- Fix download url @Osier-Yi @rayrayraykk @xieyxclack (#101, #95, #92, #76)