Evaluation scripts #78
Can evaluation scripts be provided on different datasets to validate the quantitative results provided in the paper?
Hi, the evaluation script is incompatible with this GUI version, which has gone through code refactoring and abandoned many of the interfaces used for evaluation. The evaluation mainly involves loading the different datasets, such as SPIn-NeRF and NVOS, while the metric (IoU) calculation is relatively easy. We may rewrite it in the future.
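As a minimal sketch of the metric itself, assuming predicted and ground-truth masks are available as same-shaped binary arrays (`mask_iou` is a hypothetical helper, not a function from this repo):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Treat two empty masks as a perfect match rather than dividing by zero.
    return float(inter) / float(union) if union > 0 else 1.0
```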
Thanks!
Sorry to ask again: which reference views and target views are used for evaluation on the SPIn-NeRF dataset? What was the basis for the selection?
Hi, the reference view is set to the first frame of the sorted views. However, the method is robust to reference view selection since the segmentation target is relatively simple.
Thank you very much for your answer. May I ask whether the target views used for evaluation are all views except the first frame? And is the IoU of each scene the average IoU over these target views?
No, the target views generally include the reference view. Although the reference view has a ground-truth mask for reference, the segmentation cannot guarantee that the final result aligns with the initial 2D mask, so it is still meaningful to check whether the reference view is segmented properly. The IoU score is calculated across all views jointly, not as (IoU_1 + ... + IoU_N) / N. There is a little difference between the two, since pooling intersection and union over all views weights every pixel equally, while the per-view average weights each view equally regardless of mask size.
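A minimal sketch of the two aggregation schemes, assuming `masks` is a list of `(pred, gt)` boolean-array pairs (a hypothetical structure, not this repo's actual interface):

```python
import numpy as np

def pooled_iou(masks):
    """Accumulate intersection and union over every view, then divide once."""
    inter = sum(np.logical_and(p, g).sum() for p, g in masks)
    union = sum(np.logical_or(p, g).sum() for p, g in masks)
    return float(inter) / float(union)

def mean_iou(masks):
    """Average of per-view IoU scores: (IoU_1 + ... + IoU_N) / N."""
    ious = [np.logical_and(p, g).sum() / np.logical_or(p, g).sum()
            for p, g in masks]
    return float(np.mean(ious))
```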
Thank you very much for your answer!