write evaluation scripts for comparing reference annotations and model annotations #71

Open
2 of 8 tasks
mnaydan opened this issue Aug 15, 2024 · 0 comments

mnaydan commented Aug 15, 2024

  • Convert @mnaydan's annotations to the desired annotation format (mainly determine corresponding span indices; see the span-conversion sketch after this list).
  • Get passim results for the test set with default parameters
    • "Raw" passim output (i.e. the various files within a specific top-level output directory)
    • Standardized passim output (see the standardization sketch after this list)
  • Write script(s) for computing evaluation metrics for passim results against @mnaydan's annotations (see the metrics sketch after this list)
  • Create a document that reports on results
    • Include original/aligned excerpts for visual inspection
    • Include scores for evaluation metrics
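
For the span-conversion task, a minimal sketch of one possible approach, assuming the reference annotations are stored as excerpt strings keyed by a page identifier and that the corresponding page text is available as JSONL; the file layouts and field names (`page_id`, `excerpt`, `id`, `text`) are illustrative, not the project's actual formats.

```python
import csv
import json
from pathlib import Path


def find_span(page_text: str, excerpt: str) -> tuple[int, int] | None:
    """Locate an annotated excerpt within the page text and return
    (start, end) character offsets, or None if there is no exact match."""
    start = page_text.find(excerpt)
    if start == -1:
        return None
    return start, start + len(excerpt)


def convert_annotations(annotations_csv: Path, pages_jsonl: Path, out_jsonl: Path) -> None:
    """Convert excerpt-style annotations into span-index records.

    Assumes a CSV with `page_id` and `excerpt` columns and a JSONL file of
    pages with `id` and `text` fields (both layouts are assumptions).
    """
    pages = {
        rec["id"]: rec["text"]
        for rec in (json.loads(line) for line in pages_jsonl.open(encoding="utf-8"))
    }
    with annotations_csv.open(encoding="utf-8") as f, out_jsonl.open("w", encoding="utf-8") as out:
        for row in csv.DictReader(f):
            span = find_span(pages.get(row["page_id"], ""), row["excerpt"])
            if span is None:
                print(f"no exact match for excerpt on page {row['page_id']}")
                continue
            out.write(json.dumps({"page_id": row["page_id"], "start": span[0], "end": span[1]}) + "\n")
```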
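For standardizing the raw passim output, one possible sketch is below; it assumes the alignment records are JSON lines spread across Spark-style `part-*.json` files under the output directory and that each record carries a page identifier plus begin/end character offsets. The field names used here (`id`, `begin`, `end`) are placeholders and would need to be adjusted to match the actual passim output for the chosen run mode.

```python
import json
from pathlib import Path


def standardize_passim_output(output_dir: Path, out_path: Path) -> None:
    """Flatten passim's part files into a single JSONL file of span records.

    Field names below are illustrative assumptions about the passim output.
    """
    with out_path.open("w", encoding="utf-8") as out:
        for part in sorted(output_dir.rglob("part-*.json")):
            for line in part.open(encoding="utf-8"):
                if not line.strip():
                    continue
                rec = json.loads(line)
                out.write(json.dumps({
                    "page_id": rec["id"],
                    "start": rec["begin"],
                    "end": rec["end"],
                }) + "\n")
```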
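For the evaluation metrics themselves, one reasonable starting point is span-level precision, recall, and F1 based on character overlap between reference spans and passim spans; the overlap threshold and the exact matching criterion are open design choices, not decisions made in this issue.

```python
def overlap(a: tuple[int, int], b: tuple[int, int]) -> int:
    """Number of overlapping characters between two (start, end) spans."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))


def evaluate_spans(reference: list[tuple[int, int]],
                   predicted: list[tuple[int, int]],
                   threshold: float = 0.5) -> dict[str, float]:
    """Span-level precision/recall/F1 for spans on a single page.

    A predicted span counts as a true positive if it overlaps some unmatched
    reference span by at least `threshold` of that reference span's length
    (one of several possible matching criteria).
    """
    matched: set[int] = set()
    tp = 0
    for pred in predicted:
        for i, ref in enumerate(reference):
            if i in matched:
                continue
            if overlap(pred, ref) >= threshold * (ref[1] - ref[0]):
                matched.add(i)
                tp += 1
                break
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```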