Describe the bug
I set my serving runtime to a minimum of 3 replicas, but my inference service is never scheduled onto the third serving runtime pod. The documentation is vague on this point; as far as I can tell, ModelMesh only scales the serving runtime pods themselves, which does not help if the inference services (models) are never scheduled onto them.
To Reproduce
Steps to reproduce the behavior:
Scale a serving runtime to 3 replicas.
Observe that the third serving runtime pod has no models scheduled onto it.
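For context, this is roughly how the runtime was scaled — a minimal sketch assuming the ModelMesh `ServingRuntime` CRD from modelmesh-serving; the runtime name and container image here are placeholders, not my actual config:

```yaml
# Hypothetical ServingRuntime scaled to 3 replicas.
# Only the replica count is relevant to this issue.
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: example-runtime   # placeholder name
spec:
  replicas: 3             # minimum of 3 runtime pods requested
  supportedModelFormats:
    - name: sklearn
      version: "1"
  containers:
    - name: runtime
      image: example/runtime-image:latest  # placeholder image
```

With this applied, all 3 runtime pods come up, but models only ever land on the first two.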
Expected behavior
The third serving runtime pod should have some models scheduled onto it.
By the way, I have some questions:
What is the proper way to scale inference services (models)?
As far as I know, HPA only supports CPU and memory metrics. Are any other metrics supported? I am particularly interested in concurrent-user (CCU) metrics.
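For what it's worth, the HPA `autoscaling/v2` API does accept custom metrics beyond CPU and memory, provided a metrics adapter (e.g. Prometheus Adapter) exposes them. A CCU-style metric would have to be exported by the serving stack under some name first; the metric name and deployment name below are assumptions for illustration, not metrics ModelMesh is known to export:

```yaml
# Sketch of an HPA targeting a hypothetical per-pod
# concurrent-users metric exposed via a metrics adapter.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: runtime-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-runtime      # placeholder target
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: concurrent_users   # assumed custom metric name
        target:
          type: AverageValue
          averageValue: "50"       # scale up above 50 CCU per pod
```

This only scales pods, though, which circles back to the original problem: scaling the runtime does not by itself cause models to be placed on the new replicas.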