Inference after training the model #10
Are there any ways to do inference/predictions using the latest weights after the model is trained?
I am able to do predictions during the training process using the Custom API at port 8099. However, that port is also closed once training is finished.
Thanks!
Comments
Yes, sure. You can use our BMW YOLO inference API (CPU or GPU), where you can put your trained model (read the README in those repos to get a clear view).
Great to know! Thanks a lot!
Hi there, I want to know how to run detection on several images without using the API, just locally. Or, is it possible to run detection on a folder of images using the BMW-YOLOv4-Inference-API-GPU?
In the BMW-YOLOv4-Inference-API-GPU we have an endpoint for this. Here is an example of calling it using Python:
The response should look like this:
Another method is to send each image in its own request inside a loop.