Greetings,
This may be off topic, but I'm just trying to find some help.
I'm trying to get a 300 FCN model to run in my Xcode project. I'm converting a .caffemodel to .mlmodel with coremltools:
coreml_model = coremltools.converters.caffe.convert(caffe_model, image_input_names='data', is_bgr=True, red_bias=-104, blue_bias=-123, green_bias=-117, image_scale=1)
As far as I understand the input image is in BGR color space, with above mentioned biases.
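If it helps to make that concrete: my understanding is that the converter bakes the preprocessing in as channel_out = image_scale * channel_in + channel_bias, applied per channel before the network sees the image. A quick sanity check in plain Python, with the scale and bias values copied from the call above:

```python
# Sanity check of the preprocessing the converter bakes in:
# channel_out = image_scale * channel_in + channel_bias.
# Bias/scale values are copied from the convert() call above.
def preprocess_bgr(b, g, r, image_scale=1.0,
                   blue_bias=-123, green_bias=-117, red_bias=-104):
    return (image_scale * b + blue_bias,
            image_scale * g + green_bias,
            image_scale * r + red_bias)

# A pixel equal to the subtracted means maps to (0, 0, 0):
print(preprocess_bgr(123, 117, 104))  # -> (0.0, 0.0, 0.0)
```

For what it's worth, the commonly used Caffe ImageNet means are roughly B=104, G=117, R=123, so it may be worth double-checking whether the red and blue biases in the call above are swapped.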
After conversion, when I read the model description with coremltools:
input {
  name: "data"
  type {
    imageType {
      width: 300
      height: 300
      colorSpace: BGR
    }
  }
}
output {
  name: "score"
  type {
    multiArrayType {
      dataType: DOUBLE
    }
  }
}
metadata {
  userDefined {
    key: "coremltoolsVersion"
    value: "3.3"
  }
}
The output has no shapes.
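One thing I've seen suggested is writing the missing output shape into the spec yourself before dropping the model into Xcode. A sketch (not verified against every coremltools version; the filename "fcn300.mlmodel" and the 21-class shape are assumptions for illustration, substitute your own):

```python
# Hypothetical sketch: fill in the missing output shape on the spec.
# "fcn300.mlmodel" and the (21, 300, 300) shape are assumptions;
# use your actual file name and class count.
import coremltools

spec = coremltools.utils.load_spec("fcn300.mlmodel")
out = spec.description.output[0]  # the "score" output
# FCN scores are typically laid out as (channels, height, width).
for dim in (21, 300, 300):
    out.type.multiArrayType.shape.append(dim)
coremltools.utils.save_spec(spec, "fcn300_shaped.mlmodel")
```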
When I add the model to the Xcode project, I run it by passing a CVPixelBuffer as input:
let input = buffer(from: userSelectedImage_UI)
guard let prediction = try? model.prediction(data: input!) else { return }
and the output of the model is an MLMultiArray:
let output = prediction.score
How can I convert it to a CVPixelBuffer if there are no shapes?
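My suspicion about the black image below: the raw "score" values are per-class scores, so copying them straight into a pixel buffer produces near-zero pixels. The conversion usually needs an argmax over the class channel first, then a mapping of labels to visible gray levels. A minimal sketch of that logic in plain Python (the (classes, height, width) layout is an assumption; the Swift side would apply the same steps when filling the CVPixelBuffer):

```python
# Hypothetical sketch: turn a (classes, height, width) score array into
# an 8-bit label image, as a Swift CVPixelBuffer fill would.
def scores_to_labels(scores):
    """scores[c][y][x] -> labels[y][x], each label scaled into 0..255."""
    classes = len(scores)
    height = len(scores[0])
    width = len(scores[0][0])
    scale = 255 // max(classes - 1, 1)  # spread labels over the gray range
    labels = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Pick the class with the highest score at this pixel.
            best = max(range(classes), key=lambda c: scores[c][y][x])
            labels[y][x] = best * scale
    return labels

# Two classes, a 1x2 image: class 1 wins at x=0, class 0 at x=1.
print(scores_to_labels([[[0.1, 0.9]], [[0.8, 0.2]]]))  # -> [[255, 0]]
```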
I've tried using MultiArray converters to no avail; the output is just a black image. I've tried this method and this one.
If anybody knows how to get this working in Core ML, I'd really appreciate it.