
APSB: Always Synchronize All Workers for Asynchronous Parallel Scheme via Broadcast

Federated Learning (FL) enables multiple mobile devices to collaboratively train a centralized machine learning model without sharing their private datasets. The most commonly used FL algorithms are synchronous (e.g., KSGD): the server aggregates the updated models from all involved workers and returns the aggregated global model to them via broadcast communication. However, such synchronous algorithms suffer performance degradation in heterogeneous environments. Asynchronous algorithms (e.g., KASGD) remove this limitation of heterogeneity, but they cannot benefit from broadcast acceleration. Since broadcast is much more efficient than point-to-point communication in the edge learning scenario, we apply broadcast to KASGD and propose APSB.
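
The core idea can be sketched in a few lines: in a KASGD-style asynchronous loop the server replies only to the worker that just finished, whereas APSB broadcasts the freshly updated global model to every worker. Below is a minimal NumPy simulation of that idea; the toy least-squares problem, learning rate, and variable names are illustrative assumptions, not code from this repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem shared across workers (illustrative only).
dim = 5
A = rng.normal(size=(100, dim))
x_true = rng.normal(size=dim)
b = A @ x_true

def local_gradient(x, idx):
    """Gradient of 0.5 * ||A_i x - b_i||^2 on one worker's data shard."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

num_workers = 4
shards = np.array_split(rng.permutation(len(A)), num_workers)
lr = 0.05

# Global model held by the server, plus each worker's local copy.
x_global = np.zeros(dim)
x_local = [x_global.copy() for _ in range(num_workers)]

# Asynchronous loop: at each step one (possibly slow) worker finishes.
for step in range(200):
    w = rng.integers(num_workers)              # worker that finishes first
    g = local_gradient(x_local[w], shards[w])  # gradient on its (stale) copy
    x_global = x_global - lr * g               # server applies the update

    # KASGD-style point-to-point reply would be: x_local[w] = x_global.copy()
    # APSB idea: broadcast the new global model to *all* workers instead,
    # so every local copy stays synchronized and staleness is reduced.
    for k in range(num_workers):
        x_local[k] = x_global.copy()

print("error:", np.linalg.norm(x_global - x_true))
```

In a real edge deployment the broadcast step would be a single collective send to all workers rather than a Python loop; the loop here only simulates its effect on the local copies.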

Poster

(Poster image)

Group Members

Yang Yutong, Chen Kailin, Tan Zhiren, Shi Hao

Course Info

Project for the CS5260 Neural Networks and Deep Learning II module at the National University of Singapore, School of Computing.
