Barycentric subdivision #13
Conversation
… on the data's simplicial complexes. Then, another transformation can retrieve the updated triangulation, avoiding overhead with previous models.
Looks very good. Just some minor questions
code/experiments/test.py (outdated)

  )

  # add benchmarking results
- results.add(data=out[0], config=config)
+ results.add(data=out[0][0], config=config)
Does this still work for every configuration? Why is the result array now two-dimensional?
Just saw the edit in run_experiment.py. I see now that we add a result for every barycentric subdivision. Shouldn't the line then be something like results.add(data=out[idx][0], config=config), where idx is the index of the barycentric subdivision?
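A minimal sketch of how that loop could look, assuming out is a list with one entry per barycentric subdivision level and that each entry's first element holds the benchmark result (the loop itself is illustrative, not the actual code in run_experiment.py):

# Illustrative sketch: add one benchmark result per barycentric subdivision level.
# Assumes out[idx] holds the outputs for subdivision level idx, with the
# benchmark data in position 0.
for idx in range(len(out)):
    results.add(data=out[idx][0], config=config)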
Yes! True! Good catch!
Great, it looks better now in 25ff2a0
One more thought:
How is it intended to evaluate the results of the barycentric subdivisions? I think we need to record in the ResultCollection which subdivision configuration an added benchmark corresponds to.
Since the subdivision configuration is not included in the ConfigExperimentRun, I think this can be done by adding another argument to the add method in the ResultCollection class (which then also needs to be handled in the save_result method).
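A rough sketch of what that extra argument could look like, assuming ResultCollection stores its entries in a plain list and save_result writes them to a JSON file (the internal attribute names and the file format are assumptions for illustration, not the actual implementation):

import json

class ResultCollection:
    # Illustrative stand-in; only the parts relevant to the proposal are sketched.
    def __init__(self):
        self._entries = []  # assumed internal storage

    def add(self, data, config, subdivision=None):
        # `subdivision` is the proposed extra argument: the barycentric
        # subdivision level (or configuration) the benchmark belongs to.
        self._entries.append({"data": data, "config": config, "subdivision": subdivision})

    def save_result(self, path):
        # The subdivision information must also be written out here,
        # otherwise it is lost when the results are serialized.
        with open(path, "w") as f:
            json.dump(self._entries, f, default=str)

Callers in run_experiment.py could then pass the subdivision index along with each result, e.g. results.add(data=out[idx][0], config=config, subdivision=idx).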
OMW.
We also need to merge the new commits from main into this branch.
The pull request allows one to test the trained neural networks on barycentric subdivisions of the original test dataset, specifying a maximum number of barycentric subdivisions to perform.
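A minimal sketch of the evaluation flow this describes, assuming hypothetical helpers barycentric_subdivide(dataset) and evaluate(model, dataset) that stand in for the project's actual transformation and benchmarking code:

# Illustrative only: barycentric_subdivide and evaluate are hypothetical
# placeholders for the project's actual transformation and benchmarking code.
def test_on_subdivisions(model, test_dataset, max_subdivisions, results, config):
    dataset = test_dataset
    for idx in range(max_subdivisions + 1):
        metrics = evaluate(model, dataset)        # benchmark the trained model
        results.add(data=metrics, config=config)  # one result per subdivision level
        if idx < max_subdivisions:
            dataset = barycentric_subdivide(dataset)  # refine the triangulation once more
    return results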