
Each Stick run different model #30

Open
MaduJoe opened this issue Aug 5, 2019 · 1 comment


MaduJoe commented Aug 5, 2019

[Required] Your device (RaspberryPi3, LaptopPC, or other device name):
RaspberryPi3 B+, NCS2 x 4
[Required] Your device's CPU architecture (armv7l, x86_64, or other architecture name):
armv7l
[Required] Your OS (Raspbian, Ubuntu1604, or other os name):
Raspbian
[Required] Details of the work you did before the problem occurred:

I have four NCS2 sticks and I'm trying to run a different model on each neural stick independently.

For example, the first neural stick runs face detection, the next one runs emotion recognition, the third one runs image classification, and so on.
Is it possible? I saw your MultiModel (FaceDetection, EmotionRecognition), which is a merged
model, but what I want to do is what I described above.


I really appreciate your project.
Thanks




DogukanAltay commented Nov 1, 2019

From https://software.intel.com/en-us/articles/transitioning-from-intel-movidius-neural-compute-sdk-to-openvino-toolkit :

Multiple NCS Devices
The NCSDK provided an API to enumerate all NCS devices in the system and let the application programmer run inferences on specific devices. With the OpenVINO™ toolkit Inference Engine API, the library itself distributes inferences to the NCS devices based on device load, so that logic does not need to be included in the application.

The key points when creating an OpenVINO™ toolkit application for multiple devices using the Inference Engine API are:

- The application in general doesn't need to be concerned with specific devices or managing the workloads for those devices.
- The application should create a single PlugIn instance using the device string "MYRIAD". This plugin instance handles all "MYRIAD" devices in the system for the application. The NCS and Intel® NCS 2 are both "MYRIAD" devices as they are both based on versions of the Intel® Movidius™ Myriad™ VPU.
- The application should create an ExecutableNetwork instance for each device in the host system for maximum performance. However, there is nothing in the API that ties an ExecutableNetwork to a particular device.
- Multiple Inference Requests can be created for each ExecutableNetwork. These requests can be processed by the device with a level of parallelization that best works with the target devices. For Intel® NCS 2 devices, four inference requests for each Executable Network are the optimum number to create if your application is sensitive to inference throughput.

@MaduJoe As mentioned above, you don't need to explicitly tell which model should run on which NCS2.
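
For reference, here is a minimal sketch (not from this repo) of what that looks like with the 2019-era OpenVINO Python API (`openvino.inference_engine`): two different IRs are loaded as separate ExecutableNetworks on the single "MYRIAD" device string, and the plugin spreads the resulting inference requests over whatever sticks are attached. The IR filenames and dummy inputs below are placeholders, so substitute your own face-detection and emotion-recognition models.

```python
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()

# Read both IRs. Placeholder paths -- point these at your own models.
face_net = IENetwork(model="face-detection.xml", weights="face-detection.bin")
emotion_net = IENetwork(model="emotion-recognition.xml", weights="emotion-recognition.bin")

# One ExecutableNetwork per model, both targeted at the generic "MYRIAD"
# device string. num_requests=4 follows the throughput recommendation
# quoted above.
face_exec = ie.load_network(network=face_net, device_name="MYRIAD", num_requests=4)
emotion_exec = ie.load_network(network=emotion_net, device_name="MYRIAD", num_requests=4)

face_in = next(iter(face_net.inputs))
emotion_in = next(iter(emotion_net.inputs))

# Dummy NCHW blobs shaped to whatever each network expects.
face_blob = np.zeros(face_net.inputs[face_in].shape, dtype=np.float32)
emotion_blob = np.zeros(emotion_net.inputs[emotion_in].shape, dtype=np.float32)

# Fire asynchronous requests; the MYRIAD plugin decides which stick
# services each one, so the app never assigns models to specific devices.
face_exec.start_async(request_id=0, inputs={face_in: face_blob})
emotion_exec.start_async(request_id=0, inputs={emotion_in: emotion_blob})
face_exec.requests[0].wait(-1)
emotion_exec.requests[0].wait(-1)

print("face outputs:", list(face_exec.requests[0].outputs.keys()))
print("emotion outputs:", list(emotion_exec.requests[0].outputs.keys()))
```

If you really do want to pin a model to one particular stick, recent OpenVINO releases list each MYRIAD device individually in `IECore.available_devices`, and `load_network()` accepts those specific device names, but with the approach above that is usually unnecessary.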
