UnknownError: add decoded String #14

Open
lue-bird opened this issue Aug 11, 2022 · 7 comments
Comments

lue-bird commented Aug 11, 2022

I've run into an UnknownError after loading the example audio file.
Looking through the code, it seems possible to add the actual JS decode error as a String payload to that variant.

If it helps debugging, here's my game that uses elm-audio with vite


MartinSStewart commented Aug 12, 2022

Unfortunately UnknownError can't include the actual error message. The reason is that elm-audio's model and msg types can't contain any functions (I want it to be compatible with Lamdera). So when Audio.loadAudio is called with a user-provided Result LoadError Audio -> msg function, elm-audio immediately calls that function with every possible parameter, which lets it avoid storing the function in the model. If LoadError had a variant containing a String, that would no longer be possible.
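To sketch the idea (a simplified illustration in JS, not elm-audio's actual internals, and the names here are my own): because the set of LoadError variants is finite, the user's callback can be applied to each one eagerly, so only plain msg values need to be stored.

```javascript
// All possible LoadError variants -- a finite set.
const LOAD_ERRORS = ["FailedToDecode", "NetworkError", "UnknownError"];

// Apply the user's callback to every possible error up front, so the model
// only needs to hold the resulting msg values, never the callback itself.
function precomputeErrorMsgs(userCallback) {
    const msgs = {};
    for (const err of LOAD_ERRORS) {
        msgs[err] = userCallback(err); // a plain msg value, not a function
    }
    return msgs; // safe to store in the model
}
// An `UnknownError String` variant would make the set of possible inputs
// infinite, so this eager enumeration would become impossible.
```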

Anyway, back to the issue at hand. Are you getting any error message in the network tab that indicates what's going wrong? If that doesn't help, try using this version of the elm-audio JS code, which should print out the error you're getting (send it to me so I can add it as a LoadError variant).

Debug version
function startAudio(app)
{
    window.AudioContext = window.AudioContext || window.webkitAudioContext || false;
    if (window.AudioContext) {
        let audioBuffers = []
        let context = new AudioContext();
        let audioPlaying = {};
        /* https://lame.sourceforge.io/tech-FAQ.txt
         * "All *decoders* I have tested introduce a delay of 528 samples. That
         * is, after decoding an mp3 file, the output will have 528 samples of
         * 0's appended to the front."
         *
         * Edit: Actually it seems like browsers already account for this, so let's set it to 0 instead.
         */
        let mp3MarginInSamples = 0;

        app.ports.audioPortFromJS.send({ type: 2, samplesPerSecond: context.sampleRate });

        function loadAudio(audioUrl, requestId) {
            let request = new XMLHttpRequest();
            request.open('GET', audioUrl, true);

            request.responseType = 'arraybuffer';

            request.onerror = function() {
                console.log("Network error");
                app.ports.audioPortFromJS.send({ type: 0, requestId: requestId, error: "NetworkError" });
            }

            // Decode asynchronously
            request.onload = function() {
                context.decodeAudioData(request.response, function(buffer) {
                    let bufferId = audioBuffers.length;

                    let isMp3 = audioUrl.endsWith(".mp3");
                    // TODO: Read the header of the ArrayBuffer before decoding to an AudioBuffer https://www.mp3-tech.org/programmer/frame_header.html
                    // need to use DataViews to read from the ArrayBuffer
                    audioBuffers.push({ isMp3: isMp3, buffer: buffer });

                    app.ports.audioPortFromJS.send({
                        type: 1,
                        requestId: requestId,
                        bufferId: bufferId,
                        durationInSeconds: (buffer.length - (isMp3 ? mp3MarginInSamples : 0)) / buffer.sampleRate
                    });
                }, function(error) {
                    console.log(error);
                    app.ports.audioPortFromJS.send({ type: 0, requestId: requestId, error: error.message });
                });
            }
            request.send();
        }

        function posixToContextTime(posix, currentTimePosix) {
            return (posix - currentTimePosix) / 1000 + context.currentTime;
        }

        function setLoop(sourceNode, loop, mp3MarginInSeconds) {
            if (loop) {
                sourceNode.loopStart = mp3MarginInSeconds + loop.loopStart / 1000;
                sourceNode.loopEnd = mp3MarginInSeconds + loop.loopEnd / 1000;
                sourceNode.loop = true;
            }
            else {
                sourceNode.loop = false;
            }
        }

        function interpolate(startAt, startValue, endAt, endValue, time) {
            let t = (time - startAt) / (endAt - startAt);
            if (Number.isFinite(t)) {
                return t * (endValue - startValue) + startValue;
            }
            else {
                return startValue;
            }
        }

        function createVolumeTimelineGainNodes(volumeAt, currentTime) {
            return volumeAt.map(volumeTimeline => {
                let gainNode = context.createGain();

                gainNode.gain.setValueAtTime(volumeTimeline[0].volume, 0);
                gainNode.gain.linearRampToValueAtTime(volumeTimeline[0].volume, 0);
                let currentTime_ = posixToContextTime(currentTime, currentTime);

                for (let j = 1; j < volumeTimeline.length; j++) {
                    let previous = volumeTimeline[j-1];
                    let previousTime = posixToContextTime(previous.time, currentTime);
                    let next = volumeTimeline[j];
                    let nextTime = posixToContextTime(next.time, currentTime);

                    if (nextTime > currentTime_ && currentTime_ >= previousTime) {
                        let currentVolume = interpolate(previousTime, previous.volume, nextTime, next.volume, currentTime_);
                        gainNode.gain.setValueAtTime(currentVolume, 0);
                        gainNode.gain.linearRampToValueAtTime(next.volume, nextTime);

                    }
                    else if (nextTime > currentTime_) {
                        gainNode.gain.linearRampToValueAtTime(next.volume, nextTime);
                    }
                    else {
                        gainNode.gain.setValueAtTime(next.volume, 0);
                    }
                }


                return gainNode;
            });
        }

        function connectNodes(nodes) {
            for (let j = 1; j < nodes.length; j++) {
                nodes[j-1].connect(nodes[j]);
            }
        }

        function playSound(audioBuffer, volume, volumeTimelines, startTime, startAt, currentTime, loop, playbackRate) {
            let buffer = audioBuffer.buffer;
            let mp3MarginInSeconds = audioBuffer.isMp3
                ? mp3MarginInSamples / context.sampleRate
                : 0;
            let source = context.createBufferSource();

            if (loop) {
                // Add an extra 10 seconds so there's some room if the loopEnd gets moved back later
                let durationInSeconds = 10 + (loop.loopEnd / 1000) - (buffer.length / buffer.sampleRate);
                if (durationInSeconds > 0) {

                    let sampleCount = buffer.getChannelData(0).length + Math.ceil(durationInSeconds * buffer.sampleRate);
                    let newBuffer = context.createBuffer(buffer.numberOfChannels, sampleCount, context.sampleRate);

                    for (let i = 0; i < buffer.numberOfChannels; i++) {
                        newBuffer.copyToChannel(buffer.getChannelData(i), i);
                    }
                    source.buffer = newBuffer;
                }
                else {
                    source.buffer = buffer;
                }
            }
            else {
                source.buffer = buffer;
            }

            source.playbackRate.value = playbackRate;
            setLoop(source, loop, mp3MarginInSeconds);

            let timelineGainNodes = createVolumeTimelineGainNodes(volumeTimelines, currentTime);

            let gainNode = context.createGain();
            gainNode.gain.setValueAtTime(volume, 0);

            connectNodes([source, gainNode, ...timelineGainNodes, context.destination]);

            if (startTime >= currentTime) {
                source.start(posixToContextTime(startTime, currentTime), mp3MarginInSeconds + startAt / 1000);
            }
            else {
                // TODO: offset should account for looping
                let offset = (currentTime - startTime) / 1000;
                source.start(0, offset + mp3MarginInSeconds + startAt / 1000);
            }

            return { sourceNode: source, gainNode: gainNode, volumeAtGainNodes: timelineGainNodes };
        }

        app.ports.audioPortToJS.subscribe( ( message ) => {
            let currentTime = new Date().getTime();
            for (let i = 0; i < message.audio.length; i++) {
                let audio = message.audio[i];
                switch (audio.action)
                {
                    case "stopSound":
                    {
                        let value = audioPlaying[audio.nodeGroupId];
                        audioPlaying[audio.nodeGroupId] = null;
                        value.nodes.sourceNode.stop();
                        value.nodes.sourceNode.disconnect();
                        value.nodes.gainNode.disconnect();
                        value.nodes.volumeAtGainNodes.map(node => node.disconnect());
                        break;
                    }
                    case "setVolume":
                    {
                        let value = audioPlaying[audio.nodeGroupId];
                        value.nodes.gainNode.gain.setValueAtTime(audio.volume, 0);
                        break;
                    }
                    case "setVolumeAt":
                    {
                        let value = audioPlaying[audio.nodeGroupId];
                        value.nodes.volumeAtGainNodes.map(node => node.disconnect());
                        value.nodes.gainNode.disconnect();

                        let newGainNodes = createVolumeTimelineGainNodes(audio.volumeAt, currentTime);

                        connectNodes([value.nodes.gainNode, ...newGainNodes, context.destination]);

                        value.nodes.volumeAtGainNodes = newGainNodes;
                        break;
                    }
                    case "setLoopConfig":
                    {
                        let value = audioPlaying[audio.nodeGroupId];
                        let audioBuffer = audioBuffers[value.bufferId];
                        let mp3MarginInSeconds = audioBuffer.isMp3
                            ? mp3MarginInSamples / context.sampleRate
                            : 0;

                        /* TODO: Resizing the buffer if the loopEnd value is past the end of the buffer.
                           This might not be possible to do so the alternative is to create a new audio
                           node (this will probably cause a popping sound and audio that is slightly out of sync).
                         */

                        setLoop(value.nodes.sourceNode, audio.loop, mp3MarginInSeconds);
                        break;
                    }
                    case "setPlaybackRate":
                    {
                        let value = audioPlaying[audio.nodeGroupId];
                        value.nodes.sourceNode.playbackRate.setValueAtTime(audio.playbackRate, 0);
                        break;
                    }
                    case "startSound":
                    {
                        let nodes = playSound(
                            audioBuffers[audio.bufferId],
                            audio.volume,
                            audio.volumeTimelines,
                            audio.startTime,
                            audio.startAt,
                            currentTime,
                            audio.loop,
                            audio.playbackRate);
                        audioPlaying[audio.nodeGroupId] = { bufferId: audio.bufferId, nodes: nodes };
                        break;
                    }
                }
            }

            for (let i = 0; i < message.audioCmds.length; i++) {
                loadAudio(message.audioCmds[i].audioUrl, message.audioCmds[i].requestId);
            }
        });
    }
    else {
        console.log("Web audio is not supported in your browser.");
    }
}


lue-bird commented Aug 13, 2022

Thanks for the help. The console in firefox 103.0 outputs

[vite-plugin-elm] HMR enabled

An AudioContext was prevented from starting automatically. It must be created or resumed after a user gesture on the page.

[vite-plugin-elm] ports.audioPortToJS.subscribe called

followed by

Uncaught (in promise) DOMException: The buffer passed to decodeAudioData contains an unknown content type


Re: LoadError can't have a variant that contains a string: what triggered my issue was seeing

elm-audio/src/Audio.elm

Lines 588 to 599 in 974765c

case value of
    "NetworkError" ->
        JD.succeed NetworkError

    "MediaDecodeAudioDataUnknownContentType" ->
        JD.succeed FailedToDecode

    "DOMException: The buffer passed to decodeAudioData contains an unknown content type." ->
        JD.succeed FailedToDecode

    _ ->
        JD.succeed UnknownError

and thinking: Isn't it possible to

    unknownError ->
        JD.succeed (UnknownError unknownError)

and I guess I'm still missing something to see how this is related to "UnknownError isn't constructed directly and stored in the model as a function" 😅


MartinSStewart commented Aug 14, 2022

Uncaught (in promise) DOMException: The buffer passed to decodeAudioData contains an unknown content type

Can you check the network tab and see exactly what the response looks like? My guess is that your request to https://cors-anywhere.herokuapp.com/https://freepd.com/music/Wakka%20Wakka.mp3 is returning something other than a song. cors-anywhere doesn't like being treated as a file hosting service, so maybe when this song is requested from the ellie example and from your app, the HTTP request returns an error or something?
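As a hypothetical debugging aid (the helper names here are my own, not part of elm-audio), one could inspect the Content-Type of the response before handing its bytes to decodeAudioData, so a proxy error page fails with a clear message instead of an opaque decode exception:

```javascript
// Hypothetical helper: does this Content-Type plausibly describe audio?
function looksLikeAudio(contentType) {
    // Accept audio/* plus application/ogg, which some servers use for .ogg files.
    // Proxy error pages typically come back as text/html or text/plain.
    return /^(audio\/|application\/ogg)/.test(contentType || "");
}

// Fetch a URL and decode it, rejecting early with a descriptive error
// when the response is not actually audio (e.g. a 403 error page).
function fetchAndDecode(context, audioUrl) {
    return fetch(audioUrl).then(response => {
        const contentType = response.headers.get("Content-Type");
        if (!response.ok || !looksLikeAudio(contentType)) {
            throw new Error(
                "Expected an audio response, got status " + response.status +
                " with Content-Type " + contentType);
        }
        return response.arrayBuffer();
    }).then(bytes => context.decodeAudioData(bytes));
}
```

In this case the network tab already makes the 403 visible, but an early check like this would have surfaced "got status 403 with Content-Type text/html" instead of "unknown content type".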

@MartinSStewart

As for storing the error message in the model, yes, it can be stored in elm-audio's model. The trouble is getting it to the user's model via a msg the user has provided.

I guess one solution is that I could store the error in AudioData, since that gets passed into the user's update function. Then there could be a getErrorMessage : AudioData -> LoadError -> String function that the user could call. I'll need to think about this some more.


lue-bird commented Aug 17, 2022

  • loading from cors-anywhere.herokuapp.com → 403 forbidden (your prediction was probably right)
    • response headers
      HTTP/1.1 403 Forbidden
      Server: Cowboy
      Connection: keep-alive
      Access-Control-Allow-Origin: *
      Location: /corsdemo
      Date: Wed, 17 Aug 2022 16:51:09 GMT
      Transfer-Encoding: chunked
      Via: 1.1 vegur
    • request headers
      GET /https://freepd.com/music/Wakka%20Wakka.mp3 HTTP/1.1
      Host: cors-anywhere.herokuapp.com
      User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0
      Accept: */*
      Accept-Language: en-US,en;q=0.5
      Accept-Encoding: gzip, deflate, br
      Origin: http://localhost:3000
  • loading from github.com repo → blocked. Reason: CORS header ‘Access-Control-Allow-Origin’ missing (probably rightfully so?)

No clue if that's "correct", my current "workaround" (?) is to refer to a local file in code via its path from root


MartinSStewart commented Aug 17, 2022

No clue if that's "correct", my current "workaround" (?) is to refer to a local file in code via their path from root

I'm not sure if I understand but typically you place the file in public/mySound.mp3?

@lue-bird

yeah, that
