
I am building an app for my drum classes and want it to be cross-platform. I chose Urho.Sharp because it has a low-level sound API as well as rich graphics capabilities.

As a first step I am building a metronome app. For that I am using a BufferedSoundStream, adding audio and then the required silence, as described here: https://github.com/xamarin/urho-samples/blob/master/FeatureSamples/Core/29_SoundSynthesis/SoundSynthesis.cs

But the resulting sound is not right at all; it sounds as if random bits went into the buffered stream.

Here is my code:

/// <summary>
/// Initializes the sound subsystem.
/// </summary>
void CreateSound()
{
   // Sound source needs a node so that it is considered enabled
   node = new Node();
   SoundSource source = node.CreateComponent<SoundSource>();

   soundStream = new BufferedSoundStream();
   // Set format: 44100 Hz, sixteen bit, stereo
   soundStream.SetFormat(44100, true, true);

   // Start playback. We don't have data in the stream yet, but the
   // SoundSource will wait until there is data, as the stream is by
   // default in the "don't stop at end" mode
   source.Play(soundStream);
}
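For comparison, the linked SoundSynthesis sample keeps the stream fed with *generated* PCM samples, not with the contents of sound files. A minimal sketch in that spirit (the 440 Hz tone and 0.3 amplitude are illustrative assumptions, not from the sample): with `SetFormat(44100, true, true)` the stream expects raw 16-bit samples, interleaved left/right, so that is what gets queued.

```csharp
// Sketch: generate 1/10 s of a 440 Hz sine tone as interleaved
// stereo 16-bit PCM and queue it on the stream. AddData expects raw
// samples in exactly the SetFormat layout -- not the bytes of a .wav file.
const int sampleRate = 44100;
const int channels = 2;                          // stereo => interleaved L/R
short[] buffer = new short[sampleRate / 10 * channels];
float phase = 0f;
for (int frame = 0; frame < buffer.Length / channels; frame++)
{
    short sample = (short)(Math.Sin(phase) * 0.3f * short.MaxValue);
    buffer[frame * channels] = sample;           // left channel
    buffer[frame * channels + 1] = sample;       // right channel
    phase += 2f * (float)Math.PI * 440f / sampleRate;
}
soundStream.AddData(buffer, 0, buffer.Length);
```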

/// <summary>
/// Preloads all sound resources.
/// </summary>
readonly Dictionary<PointSoundType, string> SoundsMapping = new Dictionary<PointSoundType, string>
{
    {PointSoundType.beat, "wav/beat.wav"},               
    {PointSoundType.click, "wav/click.wav"},
    {PointSoundType.click_accent, "wav/click_accent.wav"},
    {PointSoundType.crash, "wav/crash.wav"},
    {PointSoundType.foot_hh, "wav/foot_hh.wav"},
    {PointSoundType.hh, "wav/hh.wav"},
    {PointSoundType.open_hh, "wav/open_hh.wav"},
    {PointSoundType.ride, "wav/ride.wav"},
    {PointSoundType.snare, "wav/snare.wav"},
    {PointSoundType.tom_1, "wav/tom_1.wav"},
    {PointSoundType.tom_2, "wav/tom_2.wav"},
};

Dictionary<PointSoundType, Sound> SoundCache = new Dictionary<PointSoundType, Sound>();

private void LoadSoundResources()
{
    // preload all sounds
    foreach (var s in SoundsMapping)
    {
        SoundCache[s.Key] = ResourceCache.GetSound(s.Value);
        Debug.WriteLine("resource loaded: " + s.Value + ", length = " + SoundCache[s.Key].Length);
    }
}

/// <summary>
/// Fills up the stream with audio.
/// </summary>
private void UpdateSound()
{
   // Try to keep 1/10 seconds of sound in the buffer, to avoid both dropouts and unnecessary latency
   //float targetLength = 1.0f / 10.0f;

   // temporarily increase the buffer to 1 s
   float targetLength = 1.0f;

   float requiredLength = targetLength - soundStream.BufferLength;
   if (requiredLength < 0.0f)
      return;

   uint numSamples = (uint)(soundStream.Frequency * requiredLength);

   // check if stream is still full
   if (numSamples == 0)
      return;

   var silencePause = new short[44100];

   // iterate over all sounds and queue each one, followed by silence
   foreach (var s in SoundCache)
   {
      soundStream.AddData(s.Value.Handle, s.Value.DataSize);

      // add silence
      soundStream.AddData(silencePause, 0, silencePause.Length);
   }
}
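One detail worth double-checking in the buffer math above: the stream was set to stereo, so a buffer of shorts holds samples for both channels. With the values from the `SetFormat` call, a `short[44100]` silence buffer covers only half a second:

```csharp
// For a 44100 Hz, 16-bit, stereo stream, one second of audio is
// 44100 frames * 2 channels = 88200 shorts. A short[44100] silence
// buffer therefore covers only half a second of playback.
using System;

int sampleRate = 44100;
int channels = 2;
int shortsPerSecond = sampleRate * channels;     // 88200
short[] oneSecondSilence = new short[shortsPerSecond];
Console.WriteLine(shortsPerSecond);              // prints 88200
```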

1 Answer


Make sure your wav files are in the resource cache. Then, instead of playing the BufferedSoundStream, play an Urho.Audio.Sound. It is just a different overload of the same method, Urho.Audio.SoundSource.Play(), but it works.

int PlaySound(string sSound)
{
    var cache = Application.Current.ResourceCache;
    Urho.Audio.Sound sound = cache.GetSound(sSound);
    if (sound != null)
    {
        Node soundNode = scene.CreateChild("Sound");
        Urho.Audio.SoundSource soundSource = soundNode.CreateComponent<Urho.Audio.SoundSource>();
        soundSource.Gain = 0.99f;   // set the gain before playback starts
        soundSource.Play(sound);
        return 1;
    }
    return 0;
}
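Note that each call creates a new "Sound" child node that is never removed, so over a long session they accumulate. If your UrhoSharp version exposes it, the auto-remove mode (an assumption here, mirroring Urho3D's SetAutoRemoveMode) lets the node clean itself up when playback finishes:

```csharp
// Hypothetical cleanup sketch: let the node remove itself once the
// sound finishes playing, so "Sound" nodes do not pile up.
Node soundNode = scene.CreateChild("Sound");
var soundSource = soundNode.CreateComponent<Urho.Audio.SoundSource>();
soundSource.AutoRemoveMode = AutoRemoveMode.Node;  // assumed binding of Urho3D's auto-remove
soundSource.Gain = 0.99f;
soundSource.Play(sound);
```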

Since you are using urhosamples, you can then start each drum sample from the OnUpdate override, like this:

public float fRun = 0.0f;
public int iRet = 0;          // keeps count of the played sounds
public override void OnUpdate(float timeStep)
{
    fRun = fRun + timeStep;
    int iMS = (int)(10f * fRun);  // tenths of a second
    if (iMS == 100) iRet = iRet + PlaySound("wav/hh.wav");
    if (iMS == 120) iRet = iRet + PlaySound("wav/hh.wav");
    if (iMS == 140) iRet = iRet + PlaySound("wav/hh.wav");
    if (iMS == 160) iRet = iRet + PlaySound("wav/open_hh.wav");
    if (iMS >= 160) fRun = 0.8f;  // loop back
}
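One caveat with the `==` comparisons: several frames can land in the same tenth of a second, so a sample may fire more than once. A hedged alternative sketch that accumulates elapsed time and triggers on threshold crossings instead (the 0.2 s interval is an assumed tempo, not from the code above):

```csharp
// Sketch: subtract the beat interval on each trigger, so every beat
// fires exactly once regardless of the frame rate.
public float elapsed = 0f;
public float beatInterval = 0.2f;   // assumed tempo, in seconds per beat
public override void OnUpdate(float timeStep)
{
    elapsed += timeStep;
    while (elapsed >= beatInterval)
    {
        elapsed -= beatInterval;
        iRet = iRet + PlaySound("wav/hh.wav");
    }
}
```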
answered 2017-06-10T19:00:38.290