
Conversation

@sammychien

No description provided.

@sammychien
Author

/evaluate

@github-actions

Starting evaluation! Check the Actions tab for progress, or wait for a comment with the results.

@github-actions

Evaluation results

| metric | stat | baseline | pr15 |
| --- | --- | --- | --- |
| gpt_groundedness | pass_rate | 1.0 | 1.0 |
| gpt_groundedness | mean_rating | 5.0 | 5.0 |
| gpt_relevance | pass_rate | 1.0 | 1.0 |
| gpt_relevance | mean_rating | 4.95 | 4.95 |
| f1_score | mean | 0.43 | 0.43 |
| answer_length | mean | 609.1 | 612.35 |
| latency | mean | 2.67 | 2.17 |
| citations_matched | mean | 0.63 | 0.63 |

Check the workflow run for more details.
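For context on how metrics like those in the results comment are typically produced: pass_rate and mean_rating are usually aggregated from per-question GPT judge ratings on a 1–5 scale. A minimal sketch, assuming a simple aggregation (the `aggregate` helper and the pass threshold of 4 are illustrative assumptions, not taken from this workflow):

```python
# Sketch: aggregating per-question GPT ratings (1-5 scale) into the
# pass_rate and mean_rating stats reported per metric.
# ASSUMPTION: a rating >= 4 counts as a "pass"; this threshold is
# illustrative, not taken from this repo's evaluation config.

def aggregate(ratings, pass_threshold=4):
    """Return (pass_rate, mean_rating) for a list of 1-5 ratings."""
    passes = sum(1 for r in ratings if r >= pass_threshold)
    return passes / len(ratings), sum(ratings) / len(ratings)

# Example: five hypothetical groundedness ratings, all 5/5,
# would yield pass_rate 1.0 and mean_rating 5.0 as in the table.
pass_rate, mean_rating = aggregate([5, 5, 5, 5, 5])
print(pass_rate, mean_rating)  # 1.0 5.0
```

Comparing these aggregates between a baseline run and the PR branch is what lets a workflow flag quality regressions before merge.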

@sammychien sammychien changed the title from "update temp to 333" to "update temp to .333" on Oct 29, 2024
@pamelafox
Owner

Thanks for coming and PRing!

For your reference, here are my slides:
https://speakerdeck.com/pamelafox/github-universe-evaluating-rag-apps-in-github-actions

And the main repo this was based on:
https://github.com/pamelafox/rag-postgres-openai-python/
with evaluation guide here:
https://github.com/Azure-Samples/rag-postgres-openai-python/blob/main/docs/evaluation.md

To evaluate on Azure, the Azure AI Evaluation SDK docs are here:
https://learn.microsoft.com/en-us/azure/ai-studio/how-to/develop/evaluate-sdk
And if you're interested in the Azure AI CI/CD private preview, sign up here:
https://aka.ms/genAI-CI-CD-private-preview

@pamelafox pamelafox closed this Oct 31, 2024