The playback itself probably isn't the problem: the media element natively streams audio via byte-range requests. The problem is the fetching and decoding of the audio, which requires the entire file to be downloaded. You can circumvent this by pre-decoding the audio on the server. This is covered in the FAQ in the readme; there's a link to a CLI tool for decoding. You can probably also use ffmpeg for that.
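For intuition, server-side "pre-decoding" essentially means reducing the raw PCM to a small array of min/max peaks ahead of time, so the client never downloads or decodes the full file. A minimal sketch of that reduction (the function name is hypothetical; the CLI tool linked in the FAQ does this properly for real files):

```javascript
// Reduce raw PCM samples to one [min, max] pair per bucket.
// This is the kind of compact representation a waveform renderer
// can consume instead of the full decoded audio.
function computePeaks(samples, buckets) {
  const peaks = [];
  const bucketSize = Math.ceil(samples.length / buckets);
  for (let i = 0; i < buckets; i++) {
    const slice = samples.slice(i * bucketSize, (i + 1) * bucketSize);
    if (slice.length === 0) break;
    peaks.push([Math.min(...slice), Math.max(...slice)]);
  }
  return peaks;
}

// Example: 8 samples reduced to 2 buckets
console.log(computePeaks([0, 0.5, -0.5, 1, -1, 0.25, -0.25, 0], 2));
// → [[-0.5, 1], [-1, 0.25]]
```

An hour of 44.1 kHz audio is ~160 million samples per channel; a few thousand peak pairs is enough to draw a waveform, which is why pre-decoding on the server makes such a difference.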
-
I have an app that lets users load long-duration (1 hour+) audio files and view the spectrogram of each file.
Currently, I use ffmpeg to segment the audio and deliver 20–90 second chunks to wavesurfer. A user hits 'play' and the app plays the first 20–90 s of audio.
When the ws 'finish' event fires, the app requests the next 20–90 s chunk of audio and, once it has loaded, calls ws.play(). Rinse and repeat until the file ends.
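The finish-then-load loop described above reduces to simple chunk bookkeeping. A minimal sketch, with the wavesurfer calls shown only in comments and chunkUrl() standing in for whatever URL scheme the server uses (both hypothetical):

```javascript
// Compute the [start, end) windows (in seconds) that tile a long file.
function chunkWindows(totalDuration, chunkLength) {
  const windows = [];
  for (let start = 0; start < totalDuration; start += chunkLength) {
    windows.push({ start, end: Math.min(start + chunkLength, totalDuration) });
  }
  return windows;
}

// In the app this drives the loop roughly like:
//   let i = 0;
//   ws.on('finish', () => {
//     i += 1;
//     if (i >= windows.length) return;
//     ws.load(chunkUrl(windows[i]));   // then call ws.play() on 'ready'
//   });

// Example: a 200 s file in 90 s chunks → windows 0–90, 90–180, 180–200
console.log(chunkWindows(200, 90));
```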
It's really clunky because there is a delay at each chunk boundary, but for my use case it's just about acceptable: users are looking for sound events, so an occasional pause is tolerable. Still, it's far from ideal.
What I'd really like is for the audio to play smoothly for the whole hour-plus, and for the spectrogram to scroll on touch swipe. Better still (though less important) would be for that to work with the new mel and log spectrograms.
What is the best solution for this? Note: I have tried requesting the next file chunk and queuing an audio buffer, but the spectrogram rendering seems to be the cause of the delay. Is there a way to render the next 20–90 second buffer in the background?
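One way to hide the boundary delay is to start fetching (and, where possible, decoding) chunk N+1 while chunk N is still playing, so the 'finish' handler only awaits a promise that is usually already resolved. A minimal sketch of that prefetch cache, with a stub loader standing in for the real fetch + decode step (all names hypothetical):

```javascript
// Prefetch cache: calling prefetch(i) starts loading chunk i at most
// once; later callers get the same in-flight (or resolved) promise.
function makePrefetcher(loadChunk) {
  const cache = new Map();
  return {
    prefetch(i) {
      if (!cache.has(i)) cache.set(i, loadChunk(i));
      return cache.get(i);
    },
  };
}

// Usage sketch with a fake loader standing in for fetch + decode:
const prefetcher = makePrefetcher(async (i) => `chunk-${i}-data`);
prefetcher.prefetch(1);                                    // start early, while chunk 0 plays
prefetcher.prefetch(1).then((data) => console.log(data));  // → chunk-1-data
```

This only hides the network and decode latency; if the spectrogram computation itself is the bottleneck, it would additionally need to run off the main thread (e.g. in a worker) so it can render ahead without stalling playback.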