
Results when replicating the code following "Onsets and Frames: Dual-Objective Piano Transcription" #28

Open
Dream-High opened this issue Nov 25, 2021 · 3 comments


@Dream-High

Hello @jongwook, thanks for open-sourcing your code.
Recently, I used your code to reproduce the results reported in the paper "Onsets and Frames: Dual-Objective Piano Transcription". I trained the model on MAPS with batch_size=4 for 358,000 iterations. When evaluating, I got the following performance.
[screenshot: evaluation metrics]
Some metrics appear to be quite low, especially the frame metrics (precision/recall/F1 = 82.2/70.4/75.5), whereas the "Onsets and Frames: Dual-Objective Piano Transcription" paper reports 88.53/70.89/78.3.

Do you know the reason for that?
Thanks a lot.
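
For reference, the frame metrics above are frame-level precision/recall/F1 computed over the binarized piano roll. Below is a minimal sketch of how such frame metrics can be computed with mir_eval, assuming boolean piano rolls on a shared frame grid; the `frame_metrics` helper, the 31.25 fps rate, and the variable names are illustrative assumptions, not the repo's actual evaluate.py:

```python
# Minimal sketch (not the repo's evaluate.py): frame-level P/R/F1 via mir_eval.
# Assumes ref_roll / est_roll are boolean arrays of shape (num_frames, 88)
# sampled on the same frame grid; frame_rate=31.25 corresponds to 16 kHz audio
# with a 512-sample hop, which is an assumption about the setup here.
import numpy as np
import mir_eval


def frame_metrics(ref_roll, est_roll, frame_rate=31.25, fmin_midi=21):
    times = np.arange(len(ref_roll)) / frame_rate
    midi = np.arange(fmin_midi, fmin_midi + ref_roll.shape[1])
    freqs = 440.0 * 2.0 ** ((midi - 69) / 12.0)  # MIDI note number -> Hz

    # mir_eval.multipitch expects, per frame, an array of active pitches in Hz
    ref_freqs = [freqs[frame.astype(bool)] for frame in ref_roll]
    est_freqs = [freqs[frame.astype(bool)] for frame in est_roll]

    scores = mir_eval.multipitch.evaluate(times, ref_freqs, times, est_freqs)
    p, r = scores['Precision'], scores['Recall']
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1
```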

@xk-wang

xk-wang commented Nov 26, 2021

The Magenta team trained the Onsets and Frames model on the MAESTRO dataset rather than the MAPS dataset.

@Dream-High
Author

@xk-wang There are three papers that use the Onsets and Frames model, and in "Onsets and Frames: Dual-Objective Piano Transcription" the model is trained on the MAPS dataset.

@xk-wang

xk-wang commented Apr 26, 2022

@Dream-High I also used this code; it is not completely the same as the original Onsets and Frames model. You should use the original TensorFlow code and convert it to PyTorch yourself. I think this code implements the main idea, but some details are missing compared with the TensorFlow version.
