
I'm developing a UWP application (for Windows 10) that processes audio data. At startup it receives a sample buffer as an array of floats, whose items range from -1f to 1f. Earlier I used NAudio.dll 1.8.0, which provided all the necessary functionality through the WaveFileReader, waveBuffer.FloatBuffer, and WaveFileWriter classes. But when I finished the app and tried to build a Release version, I got this error: ILT0042: Arrays of pointer types are not currently supported: 'System.Int32*[]'.

I have tried to solve it:

  1. https://forums.xamarin.com/discussion/73169/uwp-10-build-fail-arrays-of-pointer-types-error

     There was a suggestion to remove the reference to the .dll, but I need it.

  2. I tried installing the same version of NAudio via Manage NuGet Packages, but WaveFileReader and WaveFileWriter were not available there.

  3. In an answer from the NAudio developer (How to store a .wav file in Windows 10 with NAudio) I read about using AudioGraph, but with it I can only build the float array of samples during real-time playback, while I need the complete set of samples right after the audio file is loaded, so I can pack them correctly. An example of getting samples during recording or playback: https://docs.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs

That's why I need help: how can I get a FloatBuffer for working with samples after the audio file is loaded? For example, for building an audio waveform or for applying audio effects.

Thanks in advance.


  4. I tried using FileStream and BitConverter.ToSingle(), but I got results different from NAudio's. In other words, I'm still looking for a solution.

     private float[] GetBufferArray()
     {
         string _path = Path.Combine(ApplicationData.Current.LocalFolder.Path, "track_1.mp3");
         using (FileStream _stream = new FileStream(_path, FileMode.Open))
         using (BinaryReader _binaryReader = new BinaryReader(_stream))
         {
             int _dataSize = _binaryReader.ReadInt32();
             byte[] _byteBuffer = _binaryReader.ReadBytes(_dataSize);

             // Note: this reinterprets the raw (compressed) bytes of an mp3 file,
             // not decoded samples, so the values cannot match NAudio's output
             int _sizeFloat = sizeof(float);
             float[] _floatBuffer = new float[_byteBuffer.Length / _sizeFloat];
             for (int i = 0, j = 0; j < _floatBuffer.Length; i += _sizeFloat, j++)
             {
                 _floatBuffer[j] = BitConverter.ToSingle(_byteBuffer, i);
             }
             return _floatBuffer;
         }
     }
    

3 Answers


Another way to read samples from an audio file in UWP is the AudioGraph API. It works with all audio formats that Windows 10 supports.

Here is some sample code:

namespace AudioGraphAPI_read_samples_from_file
{
    // App opens a file using FileOpenPicker and reads samples into an array of
    // floats using the AudioGraph API
    // Declare COM interface to access AudioBuffer
    [ComImport]
    [Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IMemoryBufferByteAccess
    {
        void GetBuffer(out byte* buffer, out uint capacity);
    }

public sealed partial class MainPage : Page
{
    StorageFile mediaFile;

    AudioGraph audioGraph;
    AudioFileInputNode fileInputNode;
    AudioFrameOutputNode frameOutputNode;

    /// <summary>
    /// We are going to fill this array with audio samples
    /// This app loads only one channel 
    /// </summary>
    float[] audioData;
    /// <summary>
    /// Current position in audioData array for loading audio samples 
    /// </summary>
    int audioDataCurrentPosition = 0;

    public MainPage()
    {
        this.InitializeComponent();            
    }

    private async void Open_Button_Click(object sender, RoutedEventArgs e)
    {
        // We ask user to pick an audio file
        FileOpenPicker filePicker = new FileOpenPicker();
        filePicker.SuggestedStartLocation = PickerLocationId.MusicLibrary;
        filePicker.FileTypeFilter.Add(".mp3");
        filePicker.FileTypeFilter.Add(".wav");
        filePicker.FileTypeFilter.Add(".wma");
        filePicker.FileTypeFilter.Add(".m4a");
        filePicker.ViewMode = PickerViewMode.Thumbnail;
        mediaFile = await filePicker.PickSingleFileAsync();

        if (mediaFile == null)
        {
            return;
        }

        // We load samples from file
        await LoadAudioFromFile(mediaFile);

        // Loading continues on the audio graph's own thread after LoadAudioFromFile
        // returns, so we wait; a real app should wait for the FileCompleted event
        await Task.Delay(5000);

        if (audioData == null)
        {
            ShowMessage("Error loading samples");
            return;
        }

        // After LoadAudioFromFile method finished we can use audioData
        // For example we can find max amplitude
        float max = audioData[0];
        for (int i = 1; i < audioData.Length; i++)
            if (Math.Abs(audioData[i]) > Math.Abs(max))
                max = audioData[i];
        ShowMessage("Maximum is " + max.ToString());
    }

    private async void ShowMessage(string Message)
    {
        var dialog = new MessageDialog(Message);
        await dialog.ShowAsync();
    }

    private async Task LoadAudioFromFile(StorageFile file)
    {
        // We initialize an instance of AudioGraph
        AudioGraphSettings settings = 
            new AudioGraphSettings(
                Windows.Media.Render.AudioRenderCategory.Media
                );
        CreateAudioGraphResult result1 = await AudioGraph.CreateAsync(settings);
        if (result1.Status != AudioGraphCreationStatus.Success)
        {
            ShowMessage("AudioGraph creation error: " + result1.Status.ToString());
        }
        audioGraph = result1.Graph;

        if (audioGraph == null)
            return;

        // We initialize FileInputNode
        CreateAudioFileInputNodeResult result2 = 
            await audioGraph.CreateFileInputNodeAsync(file);
        if (result2.Status != AudioFileNodeCreationStatus.Success)
        {
            ShowMessage("FileInputNode creation error: " + result2.Status.ToString());
        }
        fileInputNode = result2.FileInputNode;

        if (fileInputNode == null)
            return;

        // We read audio file encoding properties to pass them to FrameOutputNode creator
        AudioEncodingProperties audioEncodingProperties = fileInputNode.EncodingProperties;

        // We initialize FrameOutputNode and connect it to fileInputNode
        frameOutputNode = audioGraph.CreateFrameOutputNode(audioEncodingProperties);
        fileInputNode.AddOutgoingConnection(frameOutputNode);

        // We add a handler for reaching the end of the file
        fileInputNode.FileCompleted += FileInput_FileCompleted;
        // We add a handler which will transfer every audio frame into audioData 
        audioGraph.QuantumStarted += AudioGraph_QuantumStarted;

        // We initialize audioData: duration in seconds (1 tick = 100 ns) times sample rate
        int numOfSamples = (int)Math.Ceiling(
            (decimal)0.0000001
            * fileInputNode.Duration.Ticks
            * fileInputNode.EncodingProperties.SampleRate
            );
        audioData = new float[numOfSamples];

        audioDataCurrentPosition = 0;

        // We start the process which will read the audio file frame by frame
        // and will raise QuantumStarted events when a frame is in memory
        audioGraph.Start();

    }

    private void FileInput_FileCompleted(AudioFileInputNode sender, object args)
    {
        audioGraph.Stop();
    }

    private void AudioGraph_QuantumStarted(AudioGraph sender, object args)
    {
        AudioFrame frame = frameOutputNode.GetFrame();
        ProcessInputFrame(frame);

    }

    unsafe private void ProcessInputFrame(AudioFrame frame)
    {
        using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
        using (IMemoryBufferReference reference = buffer.CreateReference())
        {
            // We get data from current buffer
            ((IMemoryBufferByteAccess)reference).GetBuffer(
                out byte* dataInBytes,
                out uint capacityInBytes
                );
            // We discard first frame; it's full of zeros because of latency
            if (audioGraph.CompletedQuantumCount == 1) return;

            float* dataInFloat = (float*)dataInBytes;
            uint capacityInFloat = capacityInBytes / sizeof(float);
            // Number of channels defines step between samples in buffer
            uint step = fileInputNode.EncodingProperties.ChannelCount;
            // We transfer audio samples from buffer into audioData
            for (uint i = 0; i < capacityInFloat; i += step)
            {
                if (audioDataCurrentPosition < audioData.Length)
                {
                    audioData[audioDataCurrentPosition] = dataInFloat[i];
                    audioDataCurrentPosition++;
                }
            }
        }
    }
}

}

Edited: this solves the problem, since it reads the samples from a file into a float array.

answered 2017-10-08T10:53:11.187

The first popular way of getting AudioData from a wav file.

Thanks to user PI's answer to How to read the data in a wav file to an array, I have solved the problem of reading a wav file into a float array in a UWP project. But when a wav file is recorded with AudioGraph, the structure of the file differs from the standard one (perhaps this problem exists only in my project), and that leads to unpredictable results: instead of the expected 544501094 for the format chunk ID we receive the value 1263424842, and all subsequent values are read incorrectly. I found the correct ID by searching through the bytes sequentially. I realized that AudioGraph adds an extra data chunk to the recorded wav file, but the recording format is still PCM. This extra chunk looks like data about the file format, but it also contains empty values, null bytes. I couldn't find any information about it - maybe somebody here knows? I changed PI's solution to fit my needs; this is what I have:
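For what it's worth, the two magic numbers above are just four ASCII characters read as a little-endian Int32: 544501094 is the FourCC "fmt ", and 1263424842 decodes to "JUNK" - a padding chunk that Media Foundation (which AudioGraph uses for encoding) appears to write before "fmt " to reserve header space. A quick sketch for checking such IDs (the Encode/Decode helpers are mine, for illustration; they assume a little-endian platform):

```csharp
using System;
using System.Text;

class FourCC
{
    // Interprets a RIFF chunk ID as four ASCII characters, and back
    static int Encode(string id) => BitConverter.ToInt32(Encoding.ASCII.GetBytes(id), 0);
    static string Decode(int id) => Encoding.ASCII.GetString(BitConverter.GetBytes(id));

    static void Main()
    {
        Console.WriteLine(Encode("fmt "));     // 544501094
        Console.WriteLine(Decode(1263424842)); // JUNK
    }
}
```

So a robust reader should skip unknown chunks by their declared size rather than assume "fmt " immediately follows the RIFF header.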

    using (FileStream fs = File.Open(filename, FileMode.Open))
    {
        BinaryReader reader = new BinaryReader(fs);

        int chunkID = reader.ReadInt32();
        int fileSize = reader.ReadInt32();
        int riffType = reader.ReadInt32();

        // Scan byte by byte for the "fmt " chunk ID (544501094), because
        // the recorded file contains an extra chunk before it
        long _position = reader.BaseStream.Position;
        while (_position < reader.BaseStream.Length - sizeof(int))
        {
            reader.BaseStream.Position = _position;
            int _fmtId = reader.ReadInt32();
            if (_fmtId == 544501094)
            {
                break;
            }
            _position++;
        }

        int fmtSize = reader.ReadInt32();
        int fmtCode = reader.ReadInt16();

        int channels = reader.ReadInt16();
        int sampleRate = reader.ReadInt32();
        int byteRate = reader.ReadInt32();
        int fmtBlockAlign = reader.ReadInt16();
        int bitDepth = reader.ReadInt16();

        // Skip the extension of a WAVEFORMATEX-style fmt chunk
        if (fmtSize == 18)
        {
            int fmtExtraSize = reader.ReadInt16();
            reader.ReadBytes(fmtExtraSize);
        }

        int dataID = reader.ReadInt32();
        int dataSize = reader.ReadInt32();

        byte[] byteArray = reader.ReadBytes(dataSize);

        int bytesForSamp = bitDepth / 8;
        int samps = dataSize / bytesForSamp;

        float[] asFloat = null;
        switch (bitDepth)
        {
            case 16:
                Int16[] asInt16 = new Int16[samps];
                Buffer.BlockCopy(byteArray, 0, asInt16, 0, dataSize);
                IEnumerable<float> tempInt16 =
                    from i in asInt16
                    select i / (float)Int16.MaxValue;
                asFloat = tempInt16.ToArray();
                break;
            default:
                return false;
        }

        // For one-channel wav audio
        floatLeftBuffer.AddRange(asFloat);
    }

Recording from the buffer back to a file uses the inverse algorithm. At the moment this is the only correct algorithm I have for working with wav files that lets me get the audio data. For working with AudioGraph, use this article - https://docs.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs. Note that you can set the necessary format data for recording from the MIC to a file with the AudioEncodingQuality recording settings.
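The inverse (writing) direction isn't shown above; a minimal sketch of it, assuming mono float samples in -1..1 and a plain 44-byte PCM header (the WriteWav name and parameters are mine, not from the original answer):

```csharp
using System;
using System.IO;

static class WavWriter
{
    // Writes mono float samples (-1..1) as 16-bit PCM with a standard 44-byte header
    public static void WriteWav(string filename, float[] samples, int sampleRate)
    {
        short channels = 1;
        short bitDepth = 16;
        int dataSize = samples.Length * (bitDepth / 8);

        using (BinaryWriter w = new BinaryWriter(File.Create(filename)))
        {
            w.Write(new[] { 'R', 'I', 'F', 'F' });
            w.Write(36 + dataSize);                          // file size minus 8
            w.Write(new[] { 'W', 'A', 'V', 'E' });
            w.Write(new[] { 'f', 'm', 't', ' ' });
            w.Write(16);                                     // fmt chunk size for plain PCM
            w.Write((short)1);                               // format code 1 = PCM
            w.Write(channels);
            w.Write(sampleRate);
            w.Write(sampleRate * channels * (bitDepth / 8)); // byte rate
            w.Write((short)(channels * (bitDepth / 8)));     // block align
            w.Write(bitDepth);
            w.Write(new[] { 'd', 'a', 't', 'a' });
            w.Write(dataSize);
            foreach (float s in samples)
                w.Write((short)(Math.Max(-1f, Math.Min(1f, s)) * short.MaxValue));
        }
    }
}
```

A file written this way has no extra chunks, so the straightforward reader above (without the byte scan) can read it back.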

The second way of getting AudioData: using NAudio from the NuGet package.

I used the MediaFoundationReader class.

        float[] floatBuffer;
        using (MediaFoundationReader media = new MediaFoundationReader(path))
        {
            // Wave16ToFloatProvider turns 16-bit samples into 32-bit floats,
            // so the converted stream is twice as long in bytes
            int _byteBuffer32_length = (int)media.Length * 2;
            int _floatBuffer_length = _byteBuffer32_length / sizeof(float);

            IWaveProvider stream32 = new Wave16ToFloatProvider(media);
            WaveBuffer _waveBuffer = new WaveBuffer(_byteBuffer32_length);
            stream32.Read(_waveBuffer, 0, _byteBuffer32_length);
            floatBuffer = new float[_floatBuffer_length];

            for (int i = 0; i < _floatBuffer_length; i++) {
                floatBuffer[i] = _waveBuffer.FloatBuffer[i];
            }
        }

Comparing the two ways, I noticed:

  • The received sample values differ by about 1/1,000,000. I can't say which way is more precise (if you know, I'd be glad to hear);
  • The second way of getting AudioData also works for MP3 files.

If you find any mistakes or have comments on this, you're welcome.

answered 2017-03-14T12:38:24.437

Import statements:

using NAudio.Wave;
using NAudio.Wave.SampleProviders;

Inside a function:

AudioFileReader reader = new AudioFileReader(filename);
ISampleProvider isp = reader.ToSampleProvider();
// reader.Length is in bytes of the converted 32-bit float stream,
// so there are 4 bytes per sample
float[] buffer = new float[reader.Length / 4];
isp.Read(buffer, 0, buffer.Length);

The buffer array will contain 32-bit IEEE float samples. This uses the NAudio NuGet package in Visual Studio.
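Once the samples are in a float[] (from any of the answers here), the asker's goal of building an audio waveform can be sketched by reducing the samples to one peak per pixel column; the BuildPeaks name and the mono assumption are mine, for illustration:

```csharp
using System;

static class Waveform
{
    // Reduces raw mono samples (-1..1) to one peak value per output column,
    // e.g. for drawing a waveform that is `width` pixels wide
    public static float[] BuildPeaks(float[] samples, int width)
    {
        float[] peaks = new float[width];
        int samplesPerColumn = Math.Max(1, samples.Length / width);
        for (int col = 0; col < width; col++)
        {
            float peak = 0f;
            int start = col * samplesPerColumn;
            int end = Math.Min(start + samplesPerColumn, samples.Length);
            for (int i = start; i < end; i++)
                peak = Math.Max(peak, Math.Abs(samples[i]));
            peaks[col] = peak;
        }
        return peaks;
    }
}
```

Each peaks[col] can then be drawn as a vertical bar scaled to the control's height.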

answered 2020-06-29T21:31:16.980