You can list multiple file names as the first arguments on the whisper command line, or use a wildcard (e.g. `*.mp3`). whisper will load the model once and then iterate through the set of files.
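For example, assuming the audio files sit in the current directory (the `--model` and `--output_dir` flags are part of the standard whisper CLI):

```shell
# The shell expands *.mp3 into a list of file names; whisper loads
# the model a single time and transcribes the files one after another.
whisper *.mp3 --model large-v3 --output_dir transcripts
```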
---
I have a CUDA card with 12 GB of RAM, and the large-v3 model takes 10 GB. With the command-line tool, whisper reloads the full model every time I transcribe even a small file. From Python the behavior is much more efficient: the model can be loaded once and kept on the GPU. How can I achieve the same thing with the command-line version?
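If the CLI's batching (multiple file arguments) is not flexible enough, the load-once behavior can be reproduced with a small wrapper script. This is a minimal sketch using the openai-whisper Python API (`whisper.load_model` and `model.transcribe` are the real API; the helper names `collect_audio_files` and `transcribe_all` are made up for this example):

```python
import glob
import sys


def collect_audio_files(patterns):
    """Expand shell-style patterns into a sorted, de-duplicated file list."""
    files = []
    for pattern in patterns:
        files.extend(glob.glob(pattern))
    return sorted(set(files))


def transcribe_all(paths, model_name="large-v3"):
    # Import here so the glob helper works even without whisper installed.
    import whisper

    # The model is loaded once and stays resident on the GPU
    # for the whole batch, instead of being reloaded per file.
    model = whisper.load_model(model_name)
    for path in paths:
        result = model.transcribe(path)
        print(f"== {path} ==")
        print(result["text"])


if __name__ == "__main__" and len(sys.argv) > 1:
    transcribe_all(collect_audio_files(sys.argv[1:]))
```

Invoked as `python transcribe_batch.py *.mp3 recordings/*.wav`, this pays the 10 GB load cost once per run rather than once per file.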