
Add Whisper callout #426

Merged · 1 commit · Oct 9, 2024
5 changes: 5 additions & 0 deletions vod/generate-transcripts.md
@@ -10,6 +10,11 @@ api.video's AI-driven transcription feature can generate video transcripts using

Enable your audience to have a seamless user experience regardless of their language or location, and make your content more inclusive and accessible for deaf or hard-of-hearing users!

<Callout pad="2" type="success">

api.video uses [Whisper](https://openai.com/index/whisper/) for multilingual speech recognition in videos. With data security in mind, we run Whisper's ASR models on our own infrastructure and do not expose data outside our service. You control who gets access to your videos and transcripts.
</Callout>

## How to generate transcripts

To enable transcription, set these **optional** parameters when you create a video object using a `POST` request to the [Create video object endpoint](/reference/api/Videos#create-a-video-object):
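The full parameter list is documented further down in the guide (truncated in this diff). As a rough illustration only, the sketch below assumes a boolean `transcript` flag and uses Python's `requests` library to call the `https://ws.api.video/videos` endpoint with a bearer access token; check the API reference for the exact parameter names.

```python
# Minimal sketch: create a video object with transcription enabled.
# The `transcript` flag name is an assumption for illustration; confirm the
# exact optional parameters against the Create video object reference.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # bearer token obtained from your api.video API key

response = requests.post(
    "https://ws.api.video/videos",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "title": "My video with transcription",
        "transcript": True,  # assumed optional flag enabling transcript generation
    },
)
response.raise_for_status()

# The returned videoId identifies the video object for the subsequent upload step.
print(response.json()["videoId"])
```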