
Conversation

@radu-matei
Member

No description provided.

Signed-off-by: Radu Matei <[email protected]>
@karthik2804
Contributor

Is there a reason we want to check in the .spin folder?

Signed-off-by: Radu Matei <[email protected]>
@radu-matei
Member Author

Removed and added it to gitignore.

@flynnduism requested a review from karthik2804 · May 2, 2025 18:37
@flynnduism
Contributor

Is this example ready to merge?

Comment on lines +13 to +14
The transcription service is a service running the OpenAI Whisper model on an Nvidia RTX 4000 Ada Generation GPU on
an LKE cluster.
Member

Suggested change
Original:
The transcription service is a service running the OpenAI Whisper model on an Nvidia RTX 4000 Ada Generation GPU on
an LKE cluster.
Suggested:
The transcription service is a service running the OpenAI Whisper model on an
Nvidia RTX 4000 Ada Generation GPU on an LKE cluster.

spacing nit

[variables]
# This is a currently deployed transcription API endpoint
# This may not be available by the time you run this sample.
transcription_api = { default = "http://172.236.120.221:30001/v1/audio/transcriptions" }
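Since the comment warns the deployed endpoint may no longer exist, anyone running this sample would likely need to point the variable at their own deployment. One way to do that without editing spin.toml is Spin's environment-variable provider, which maps `SPIN_VARIABLE_<NAME>` (name uppercased) onto a `[variables]` entry; the localhost URL below is illustrative, not a real endpoint:

```shell
# Override the transcription_api variable for this shell session.
# Spin's environment-variable provider reads SPIN_VARIABLE_TRANSCRIPTION_API
# and exposes it to the app as the transcription_api variable.
export SPIN_VARIABLE_TRANSCRIPTION_API="http://localhost:8000/v1/audio/transcriptions"
```

Running `spin up` in the same shell would then route transcription requests to the overridden endpoint.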
@vdice
Member · Jul 15, 2025

As warned in the comment above, I don't think this service is available as of testing today. The /api/transcribe internal endpoint returns a 504, seen when inspecting the browser console:

/api/transcribe:1  Failed to load resource: the server responded with a status of 504 (Gateway Time-out)

Interestingly, the app logs via spin aka logs don't show any explicit errors, despite captions/src/index.ts containing error handling, so that might be another item to revisit.

Member

If we do plan on standing the inferencing service up again and merging this example, we might consider adding minimal "how to build and deploy" steps here.
