diff --git a/bifrost/app/blog/blogs/ai-best-practices/src.mdx b/bifrost/app/blog/blogs/ai-best-practices/src.mdx
index af5d14bc72..5cc74068ea 100644
--- a/bifrost/app/blog/blogs/ai-best-practices/src.mdx
+++ b/bifrost/app/blog/blogs/ai-best-practices/src.mdx
@@ -32,18 +32,21 @@ In the following section, we will go over the best practices when building with
 ## Best Practices
 
-### 1. Define Key Performance Metrics
+## 1. Define Key Performance Metrics
 
 To effectively monitor the performance of your AI app, it's crucial to define key performance metrics (KPIs) that align with your goals.
 
-You can use observability tools to track and visualize these essential metrics such as latency, usage and costs, to make sure the models you use in your AI application run optimally. Here are some key metrics to focus on:
+### Key Metrics
 
 - **Latency**: Measure the time taken for the model to generate a response.
 - **Throughput**: Track the number of requests handled by the model per second.
 - **Accuracy**: Evaluate the correctness of the model's predictions.
 - **Error Rate**: Track the frequency of errors or failures in model predictions.
 
-**Video: Helicone's pre-built dashboard metrics and the ability to segment data.**
+
+### Segmenting Data on Helicone's Dashboard
+
+**Tip:** Look for a solution that provides a real-time dashboard to monitor key metrics and is **capable of handling large data volumes**.
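
As an illustrative aside on the **Latency** and **Error Rate** metrics listed in this patch: the two can be captured with a simple timing wrapper around each model call. This is a minimal sketch, not part of the patch or Helicone's SDK; `timed_call` and `fake_model` are hypothetical names introduced here for illustration.

```python
import time

def timed_call(fn, *args, **kwargs):
    """Call fn and return (result, latency_seconds, error)."""
    start = time.perf_counter()
    try:
        result = fn(*args, **kwargs)
        return result, time.perf_counter() - start, None
    except Exception as exc:
        # Failed calls still contribute a latency sample and an error count.
        return None, time.perf_counter() - start, exc

def fake_model(prompt):
    # Stand-in for a real model/API call.
    return f"echo: {prompt}"

latencies, errors = [], 0
for prompt in ["hi", "hello", "hey"]:
    _, latency, err = timed_call(fake_model, prompt)
    latencies.append(latency)
    if err is not None:
        errors += 1

avg_latency = sum(latencies) / len(latencies)   # average latency per request
error_rate = errors / len(latencies)            # fraction of failed requests
```

In practice an observability tool collects these samples per request automatically; the point of the sketch is only to show what the dashboard metrics are measuring.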