Adding support for unencapsulated opus and exposing max_data_bytes #229
Reopening this PR against a stable dev branch.
This PR contains three main changes that I'd like to get merged in.
A change to autogen.sh that disables NEON support in speexdsp. This was required to compile the fork currently available on master with clang on macOS.
The NEON-optimized library includes inline asm, and from this thread it appears that Emscripten doesn't support inline asm instructions.
The other two are functional changes. I work on a streaming service with some hardware support that doesn't use a container-based approach to streaming the audio. The changes are:
- A new parameter, rawOpus. This prevents encapsulation of the Opus frames in an Ogg container. Each encoded Opus frame is ejected as soon as it's available; the frames are not segmented, nor are they placed into Ogg pages (see the usage sketch after this list).
- A new parameter, encoderOutputMaxLength. This exposes, through the main recorder API, the max_data_bytes argument of libopus's opus_encode_float function. The hardware devices we stream data into have some limitations around frame sizes that need to be controlled, and this saves me from having to re-encode each frame after it's ejected from this recorder.
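For context, here's a rough sketch of how I'd expect the two new options to be used from the recorder's JavaScript side. Only rawOpus and encoderOutputMaxLength come from this PR; the other option names and the ondataavailable-style callback are assumptions about the surrounding Recorder API, and sendFrameToDevice is a hypothetical downstream consumer.

```js
// Sketch only: apart from rawOpus and encoderOutputMaxLength, the option
// names and the ondataavailable callback are assumptions about the API.
const recorder = new Recorder({
  encoderPath: "encoderWorker.min.js",
  rawOpus: true,                 // emit bare Opus frames, no Ogg pages
  encoderOutputMaxLength: 1000   // forwarded to opus_encode_float as max_data_bytes
});

recorder.ondataavailable = (typedArray) => {
  // With rawOpus set, each callback delivers a single encoded Opus frame as
  // soon as the encoder produces it, so it can be handed straight to the
  // hardware stream without unwrapping an Ogg container. libopus guarantees
  // the frame is no larger than max_data_bytes.
  sendFrameToDevice(typedArray); // hypothetical consumer
};

recorder.start();
```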
I added a couple of unit tests, but they're not very complete. I'd need to plug an encodable frame into the unit tests to validate at a lower level.
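For what it's worth, once a real encodable frame is wired into the tests, a lower-level assertion could look something like the sketch below: it just confirms that an emitted frame isn't Ogg-encapsulated (Ogg pages always begin with the "OggS" capture pattern) and that it respects the configured size cap. The helper name is hypothetical.

```js
// Hypothetical lower-level check, assuming frames arrive as Uint8Array chunks.
function checkRawFrame(frame, maxLength) {
  // A bare Opus frame should never begin with an Ogg page header:
  // Ogg pages always start with the "OggS" capture pattern (0x4F 0x67 0x67 0x53).
  const looksLikeOgg =
    frame.length >= 4 &&
    frame[0] === 0x4f && frame[1] === 0x67 &&
    frame[2] === 0x67 && frame[3] === 0x53;
  if (looksLikeOgg) {
    throw new Error("frame appears to be Ogg-encapsulated");
  }
  // encoderOutputMaxLength maps onto max_data_bytes, so no frame should exceed it.
  if (frame.length === 0 || frame.length > maxLength) {
    throw new Error("frame size outside expected bounds");
  }
}
```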