> **Note**
> Feedback and pull requests for this documentation are very welcome.
The priority and mode of analysis processing can be set:

```dart
final vision = UnifiedAppleVision();
vision.executionPriority = VisionExecutionPriority.veryHigh;
vision.analyzeMode = VisionAnalyzeMode.still;
```
> **Note**
> There are two modes of analysis processing:

| Mode | Description |
| --- | --- |
| `.still` | Suitable for analyzing still images one at a time. |
| `.sequential` | Suitable for analyzing a series of images, such as the frames of a video. The results of the previous analysis are carried into the next one, which makes this mode suitable for tasks like object tracking. |
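For example, when processing video you would switch to `.sequential` before feeding frames. Below is a minimal sketch, using the `analyze` method shown later in this section; the `onFrame` callback and its wiring are assumptions for illustration, while `analyzeMode`, `VisionAnalyzeMode.sequential`, and the request types come from the examples in this document:

```dart
// Sketch: sequential-mode analysis for video frames.
// `onFrame` is a hypothetical hook; call it for each incoming frame.
final vision = UnifiedAppleVision()
  ..analyzeMode = VisionAnalyzeMode.sequential;

void onFrame(VisionInputImage frame) {
  vision.analyze(
    image: frame,
    requests: [
      VisionDetectTextRectanglesRequest(
        onResult: (result) {
          final observations = result.ofDetectTextRectanglesRequest;
          // In sequential mode, previous results inform this analysis,
          // e.g. for tracking text rectangles across frames.
        },
      ),
    ],
  );
}
```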
You can analyze images by calling the `analyze` method:
```dart
// Create the input image.
final input = VisionInputImage(
  bytes: image.bytes,
  size: image.size,
);

// Analyze it.
vision.analyze(
  image: input,
  requests: [
    // Add the requests you wish to perform.
    VisionRecognizeTextRequest(
      onResult: (result) {
        final observations = result.ofRecognizeTextRequest; // get casted results
        // some action
      },
      onError: (error) {
        // handle error
      },
    ),
    VisionDetectTextRectanglesRequest(
      onResult: (result) {
        final observations = result.ofDetectTextRectanglesRequest;
        // some action
      },
    ),
  ],
);
```
For example, if you wish to perform text recognition, add a `VisionRecognizeTextRequest()` to the `requests` list.
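If you need a starting point for creating the input itself, the following is a minimal sketch that loads a Flutter asset and decodes it once to obtain its dimensions. Only the `VisionInputImage(bytes:, size:)` constructor comes from the example above; the `loadInputImage` helper and the assumption that encoded asset bytes are accepted are illustrative, so check the package documentation for the byte format `VisionInputImage` actually expects.

```dart
import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/services.dart';
import 'package:unified_apple_vision/unified_apple_vision.dart';

// Hypothetical helper: build a VisionInputImage from an asset.
// Assumption: VisionInputImage accepts the image bytes as loaded;
// verify the expected pixel/byte format in the package docs.
Future<VisionInputImage> loadInputImage(String assetPath) async {
  final data = await rootBundle.load(assetPath);
  final bytes = data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes);

  // Decode once to learn the image's width and height.
  final codec = await ui.instantiateImageCodec(bytes);
  final frame = await codec.getNextFrame();

  return VisionInputImage(
    bytes: bytes,
    size: ui.Size(
      frame.image.width.toDouble(),
      frame.image.height.toDouble(),
    ),
  );
}
```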