Adapt algo to 910B #156

Merged: 1 commit, Dec 11, 2023
Conversation

@MashiroChen (Collaborator) commented on Dec 11, 2023

  1. Reduce unnecessary control flow in MADDPG: move the learner step out of the actor loop, drop the `if done` branch, and use a soft update with an update interval of 1. Clear performance gain: 6.6 ms/step -> 4.8 ms/step (see the first sketch after this list).
  2. Tune the number of parallel environment threads for PPO: on ARM, 30 threads perform poorly, so reduce the count to 5. Under GE, kernel-by-kernel execution is slower than end-to-end subgraph sink, so switch to subgraph-sink execution (see the second sketch below). Both optimizations bring clear gains; the thread change alone improves performance from 8.4 ms/step -> 7.2 ms/step.
  3. Adapt MAPPO to the 910B.
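
A minimal sketch of the restructured MADDPG loop from item 1. All names (`env`, `actor`, `learner`, `buffer`, `soft_update`, `TAU`) are hypothetical stand-ins, not identifiers from this repository; the sketch only illustrates hoisting the learner out of the actor rollout, dropping the `if done` branch, and soft-updating every step.

```python
# Hedged sketch, not the repository's actual code.
TAU = 0.005  # assumed soft-update coefficient

def soft_update(target_params, online_params, tau=TAU):
    """Polyak update; with an update interval of 1 this runs every step."""
    for t, s in zip(target_params, online_params):
        t.data = (1.0 - tau) * t.data + tau * s.data

def train_epoch(env, actor, learner, buffer, rollout_steps):
    # Assumes an auto-resetting environment, so no `if done` branch is
    # needed inside the loop and the compiled graph has one control path.
    obs = env.reset()
    for _ in range(rollout_steps):
        action = actor.act(obs)
        obs_next, reward, done, _ = env.step(action)
        buffer.push(obs, action, reward, obs_next, done)
        obs = obs_next
    # Learner hoisted out of the actor loop: one update per epoch instead
    # of a conditional update inside every rollout iteration.
    loss = learner.update(buffer.sample())
    soft_update(learner.target_params, learner.online_params)
    return loss
```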

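A hedged sketch of the PPO tuning from item 2, assuming MindSpore's `set_context` API; the config keys (`num_parallel_envs`, `enable_graph_sink`) are made-up illustrations of where such settings would live, not options exposed by this repository.

```python
import mindspore as ms

# Graph mode on Ascend lets the training step be sunk as a subgraph
# end to end, instead of being dispatched kernel by kernel under GE.
ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")

# Hypothetical algorithm config: 30 parallel environment threads
# oversubscribe ARM hosts, so the count is dropped to 5.
ppo_config = {
    "num_parallel_envs": 5,     # was 30; 5 performs better on ARM
    "enable_graph_sink": True,  # subgraph sink beats kernel-by-kernel
}
```
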
@WilfChen merged commit 8598546 into mindspore-lab:master on Dec 11, 2023
0 of 3 checks passed