Hm, good idea for the checks, especially for costly metrics. We would also want the LLM-as-judge metric to stop early if the OpenAI key is not provided, for example.
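A minimal sketch of what such an LLM-as-judge guard could look like, assuming the key is read from the conventional `OPENAI_API_KEY` environment variable (the function name and error message are illustrative, not the actual lighteval API):

```python
import os


def require_openai_key() -> None:
    """Fail fast if LLM-as-judge metrics are requested without an API key.

    Hypothetical pre-flight check: called before any endpoint is created,
    so a missing key aborts the run immediately instead of after setup.
    """
    if not os.environ.get("OPENAI_API_KEY"):
        raise EnvironmentError(
            "OPENAI_API_KEY is not set; LLM-as-judge metrics cannot run. "
            "Aborting before endpoint creation."
        )
```

The check itself is cheap, so it could run unconditionally whenever a judge-based metric is in the requested task list.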
Issue encountered
When passing an unsupported metric (e.g. a single-token metric) to `InferenceEndpointModel`, an error is raised from `Pipeline.evaluate`. This happens only after the endpoint has been created and the model has been loaded.
I wonder if the error could be raised earlier, for optimization reasons: the process would fail faster and no resources would be wasted.
Solution/Feature
Raise the error before the endpoint has been created.
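One way the proposed early check could be sketched: validate every requested metric against the model's supported categories at pipeline-construction time, before any endpoint is spun up. The category names, the `SUPPORTED_METRIC_CATEGORIES` set, and the function below are all hypothetical placeholders, not the real lighteval API:

```python
# Assumed set of metric categories an endpoint-backed model supports.
SUPPORTED_METRIC_CATEGORIES = {"generative", "perplexity"}


def validate_metrics(metric_categories: list[str]) -> None:
    """Raise early if any requested metric category is unsupported.

    Intended to run before endpoint creation, so an unsupported metric
    (e.g. a single-token metric) fails the run immediately.
    """
    unsupported = [
        m for m in metric_categories if m not in SUPPORTED_METRIC_CATEGORIES
    ]
    if unsupported:
        raise ValueError(
            f"Unsupported metric categories for this model: {unsupported}. "
            "Aborting before endpoint creation to avoid wasting resources."
        )


validate_metrics(["generative"])       # passes silently
try:
    validate_metrics(["single_token"])  # fails fast, no endpoint created
except ValueError as err:
    print(err)
```

Wiring this into the pipeline constructor (rather than `Pipeline.evaluate`) would move the failure from post-deployment to pre-deployment.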
Possible alternatives
Leave it as it is.