Get started with OpenVINO™ Test Drive, an application that lets you run generative AI models and vision models trained by Intel® Geti™ directly on your computer or edge device using OpenVINO™ Runtime.
With OpenVINO™ Test Drive you can:
- Chat with LLMs and evaluate model performance on your computer or edge device
- Experiment with different text prompts to generate images using Stable Diffusion and Stable Diffusion XL models
- Transcribe speech from video using Whisper models, including timestamp generation
- Run and visualize results of models trained by Intel® Geti™ using single-image inference or batch inference mode
Download the latest release from the repository's Releases page.
Note
To verify the integrity of a downloaded file, you can generate its SHA-256 checksum and compare it with the SHA-256 in the corresponding `.sha256` file published on the Releases page.
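The checksum comparison can also be scripted. Below is a minimal Python sketch; the file names in the example are illustrative, so substitute the actual artifact names from the Releases page:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(artifact: str, checksum_file: str) -> bool:
    """Compare the computed digest with the first token of the published .sha256 file."""
    expected = Path(checksum_file).read_text().split()[0].lower()
    return sha256_of(artifact) == expected

# Example with hypothetical file names:
# verify("openvino_testdrive.msix", "openvino_testdrive.msix.sha256")
```

The `.sha256` files conventionally contain the hex digest followed by the file name, which is why only the first whitespace-separated token is compared.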
Installation on Windows
- Download the zip archive from the `Windows` folder of the Releases page.
- Extract the zip archive, double-click the MSIX installation package, and click the `Install` button to start the installation.
- Click the application name in the Windows app list to launch OpenVINO™ Test Drive.
Upon starting the application, you can import LLMs from Hugging Face or upload Intel® Geti™ models from a local disk.
- Choose a model from the predefined set of popular models, or pick one from Hugging Face using `Import model` -> `Hugging Face`, and import it.
- Pick the imported LLM from the `My models` section and chat with it in the `Playground` tab. You can export the LLM via the `Export model` button.
- Use the `Performance metrics` tab to get LLM performance metrics on your computer.
- Try Whisper models for video transcription.
- Pick the imported speech-to-text model from the `My models` section and upload a video for transcription. You can also search for words in the transcript or download it.
- Use the `Performance metrics` tab to get performance metrics on your computer.
- Choose an image generation model from the predefined set of popular models, or pick one from Hugging Face using `Import model` -> `Hugging Face`, and import it.
- Pick the imported model from the `My models` section and prompt it to generate an image. You can also download the generated image.
- Use the `Performance metrics` tab to get performance metrics on your computer. You can export the model via the `Export model` button.
- Download the code deployment for a model trained by Intel® Geti™ in OpenVINO format.
Note
Please check the Intel® Geti™ documentation for more details.
- Import the deployment code into OpenVINO™ Test Drive using the `Import model` -> `Local disk` button.
- Run and visualize inference results on individual images using the `Live inference` tab.
- For batch inference, use the `Batch inference` tab: provide the path to a folder with input images in `Source folder`, specify a `Destination folder` for the output results, and click `Start` to begin batch inference.
The application requires the Flutter SDK and the dependencies for your specific platform to be installed. In addition, the bindings and their dependencies for your platform must be added to `./bindings`.
- Install the Flutter SDK. Make sure to follow the guide for Flutter dependencies.
- Build the bindings and put them in the `./bindings` folder. OpenVINO™ Test Drive uses bindings to OpenVINO™ GenAI and OpenVINO™ Vision ModelAPI, located in the `./openvino_bindings` folder. See its readme for more details.
- Once done, you can start the application:
flutter run
- OpenVINO™ - a software toolkit for optimizing and deploying deep learning models.
- GenAI Repository and OpenVINO Tokenizers - resources and tools for developing and optimizing Generative AI applications.
- Intel® Geti™ - software for building computer vision models.
- OpenVINO™ Vision ModelAPI - a set of wrapper classes for particular tasks and model architectures that simplifies data preprocessing and postprocessing as well as routine procedures.
For those who would like to contribute to OpenVINO™ Test Drive, please check out the Contribution Guidelines for more details.
OpenVINO™ Test Drive repository is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
FFmpeg is an open source project licensed under LGPL and GPL. See https://www.ffmpeg.org/legal.html. You are solely responsible for determining if your use of FFmpeg requires any additional licenses. Intel is not responsible for obtaining any such licenses, nor liable for any licensing fees due, in connection with your use of FFmpeg.