
I have a 2D canvas-based web audio game that includes spatialized audio sources placed at specific pixel coordinates on the canvas, using the Web Audio API.

I have already managed to position each audio source precisely on the canvas element using the Web Audio PannerNode, like this:

var canvas = document.getElementById("map");
var context = canvas.getContext("2d");
var audioContext = new AudioContext();

function audioFileLoader(fileDirectory) {
    var soundObj = {};
    var playSound = undefined;
    var panner = undefined;
    var gainNode = undefined;

    // Fetch the audio file and decode it into a buffer
    var getSound = new XMLHttpRequest();
    soundObj.fileDirectory = fileDirectory;
    getSound.open("GET", soundObj.fileDirectory, true);
    getSound.responseType = "arraybuffer";
    getSound.onload = function() {
        audioContext.decodeAudioData(getSound.response, function(buffer) {
            soundObj.soundToPlay = buffer;
        });
    };
    getSound.send();

    // Spatialize the source with an HRTF panner
    panner = audioContext.createPanner();
    panner.panningModel = 'HRTF';

    // Position the source at canvas pixel coordinates
    soundObj.position = function(x, y, z) {
        panner.setPosition(x, y, z);
    };

    return soundObj;
}

I am now trying to upgrade the audio spatialization with the Resonance Audio Web SDK, so that I can take advantage of its arguably more advanced spatialization features.

How can I use Resonance Audio's setPosition to define an audio source's position on the canvas element in pixels (x, y)?

I can't figure out how to convert Resonance Audio's native scale (meters) into pixel coordinates on my canvas element. I assume that once I work this out, I could also define the size and position of different audio rooms in my 2D game, which would be very cool.
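
To illustrate, what I imagine is presumably just a linear scale factor between the two coordinate systems; the factor and helper below are made up for the sake of the example, not part of the SDK:

    // Hypothetical scale: how many canvas pixels represent one meter.
    // PIXELS_PER_METER and pixelsToMeters are illustrative names only.
    var PIXELS_PER_METER = 10;

    function pixelsToMeters(px) {
        return px / PIXELS_PER_METER;
    }

    // A source drawn at canvas pixel (140, 150) would then sit at
    // (14, 15) in Resonance Audio's meter-based space.
    source.setPosition(pixelsToMeters(140), pixelsToMeters(150), 0);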

Thanks.


1 Answer


So if you take the coordinates, in pixels, of where you want to place a source on the canvas, and then use the same unit (pixels) to position and update the listener, everything works. As long as the sources and the listener share the same unit, they stay correctly related to one another and Resonance Audio's spatialization just works:

// Set some global variables
var canvas = document.getElementById("map");
var context = canvas.getContext("2d");
var mouseX;
var mouseY;

// Map event functions

// Get mouse coordinates on the map element
function updateCoords(event) {
    mouseX = event.offsetX;
    mouseY = event.offsetY;
}

// Mouse event handler: track the cursor and move the listener with it
function moveAroundMap(event) {
    updateCoords(event);
    // mapX and mapY are elements that display the current coordinates
    mapX.innerText = mouseX;
    mapY.innerText = mouseY;

    // Update the listener position on the canvas in pixels (x, y);
    // elevate the listener rather than lowering the sources
    resonanceAudioScene.setListenerPosition(mouseX, mouseY, -20);
}

canvas.addEventListener("mousemove", moveAroundMap, false);


// Create an AudioContext
let audioContext = new AudioContext();

// Create a (third-order Ambisonic) Resonance Audio scene and pass it
// the AudioContext.
let resonanceAudioScene = new ResonanceAudio(audioContext);
resonanceAudioScene.setAmbisonicOrder(3);

// Connect the scene's binaural output to stereo out.
resonanceAudioScene.output.connect(audioContext.destination);

// Create an AudioElement.
let audioElement = document.createElement('audio');

// Load an audio file into the AudioElement.
audioElement.src = './samples/mono-seagulls.mp3';
audioElement.loop = true;

// Generate a MediaElementSource from the AudioElement.
let audioElementSource = audioContext.createMediaElementSource(audioElement);

// Add the MediaElementSource to the scene as an audio input source.
let source = resonanceAudioScene.createSource();
audioElementSource.connect(source.input);

// Set the source position relative to the listener, again in pixels.
source.setPosition(140, 150, 0);

// Start playback on the first user gesture (browser autoplay policies
// require one before an AudioContext may produce sound).
canvas.addEventListener("click", function() {
    audioContext.resume();
    audioElement.play();
}, { once: true });
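
The question also mentions audio rooms. Since everything above is positioned in pixels, the room dimensions would presumably use the same pixel units. A minimal sketch using the SDK's setRoomProperties, with made-up dimensions and materials:

    // Sketch: a "room" whose dimensions use the same pixel units as the
    // source and listener positions above. The sizes and materials here
    // are placeholders, not values from the original answer.
    let roomDimensions = {
        width: 800,   // canvas width in pixels
        height: 100,  // extent along the made-up vertical axis
        depth: 600,   // canvas height in pixels
    };
    let roomMaterials = {
        left: 'brick-bare',
        right: 'brick-bare',
        front: 'transparent',
        back: 'transparent',
        down: 'grass',
        up: 'transparent',
    };
    resonanceAudioScene.setRoomProperties(roomDimensions, roomMaterials);

One caveat: Resonance Audio's reflection and reverb models treat dimensions as meters, so a pixel-sized room will behave like an enormous physical space; the early reflections may sound off unless the values are scaled down.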
answered 2020-05-30T23:09:09.037