Present details on which tools support what #6
Comments
I think a simple table is fine. We already have a test function implemented that automatically compares the likelihood values. It passed for all cases but 0006 and 0007 ;-) Is it planned to also compare objective values after fitting? If yes, what standard specifications for the optimization do you have in mind?
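As a minimal sketch of what such an automatic comparison could look like (the function name and tolerance are illustrative assumptions, not the actual implementation in the repository):

```python
import math

def likelihoods_match(computed_llh, reference_llh, rel_tol=1e-3):
    """Check whether a tool's log-likelihood matches the reference value.

    `computed_llh`, `reference_llh`, and the tolerance are hypothetical;
    the real test function in the test suite may differ.
    """
    return math.isclose(computed_llh, reference_llh, rel_tol=rel_tol)

# Values agreeing to within 0.1 % pass; clearly different values fail.
print(likelihoods_match(-12.3456, -12.3460))
print(likelihoods_match(-12.3456, -15.0))
```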
0006 should be possible to fix (#9). For 0007, it could be that for you chi2 is computed on the linear scale, while we give the values here on, e.g., the log scale.
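To illustrate why the scale matters, here is a small sketch (the measurement and simulation values are made up, not taken from test case 0007): the same data yield different chi2 values depending on whether residuals are taken on the linear or the log10 scale.

```python
import math

# Illustrative measurements and simulations on the linear scale
measurements = [1.0, 2.0, 4.0]
simulations = [1.1, 1.9, 4.2]
sigma = 0.5

def chi2(meas, sim, sigma):
    """Sum of squared residuals weighted by the noise parameter."""
    return sum((m - s) ** 2 / sigma ** 2 for m, s in zip(meas, sim))

chi2_linear = chi2(measurements, simulations, sigma)
chi2_log10 = chi2([math.log10(m) for m in measurements],
                  [math.log10(s) for s in simulations], sigma)

# The two values differ, so reference chi2 values are only comparable
# if both sides agree on the observable scale.
print(chi2_linear, chi2_log10)
```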
We have not thought about that yet, but good point. Given the simplicity of the test models, one could actually do optimization. In particular, one could check gradients (easy to compute analytically) and check whether the log-likelihoods at the found optimal parameters match, which would indicate that the respective tool is able to parameterize the model. That would, however, be a bit more involved.
Yes, I agree. Checking gradients would be a nice extension!
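A gradient check of this kind is commonly done by comparing the analytical gradient against central finite differences; the sketch below uses hypothetical function names and a simple quadratic objective, purely for illustration.

```python
def check_gradient(f, grad_f, x, eps=1e-6, tol=1e-4):
    """Compare an analytical gradient against central finite differences.

    `f` would be the objective (e.g. negative log-likelihood) and
    `grad_f` its analytical gradient; both names are illustrative,
    not part of any existing test suite.
    """
    for i in range(len(x)):
        x_plus = list(x)
        x_plus[i] += eps
        x_minus = list(x)
        x_minus[i] -= eps
        fd = (f(x_plus) - f(x_minus)) / (2 * eps)  # central difference
        if abs(fd - grad_f(x)[i]) > tol:
            return False
    return True

# Quadratic objective with a known analytical gradient
f = lambda p: p[0] ** 2 + 3 * p[1]
grad_ok = lambda p: [2 * p[0], 3.0]
grad_bad = lambda p: [2 * p[0] + 1, 3.0]  # deliberately wrong

print(check_gradient(f, grad_ok, [1.0, 2.0]))
print(check_gradient(f, grad_bad, [1.0, 2.0]))
```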
Table implemented in https://github.com/PEtab-dev/PEtab/tree/develop#petab-features-supported-in-different-tools, to be filled in by all tools. The gradient discussion moved to #24, thus closing here.
Original issue description: How best to represent this? One could make a table with a row for each test case (with a short description) and then check for each tool whether the test case passes. Other ideas?
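The proposed layout could be sketched as a markdown table like the following (tool names, case descriptions, and pass/fail entries are placeholders, not actual results):

```
| Test case | Description         | Tool A | Tool B |
|-----------|---------------------|--------|--------|
| 0001      | (short description) | yes    | yes    |
| 0006      | (short description) | yes    | no     |
| 0007      | (short description) | no     | yes    |
```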