It has been suggested to read the audio data from start to finish, create a copy that is written from end to start, and then simply play back that reversed audio data.
Are there any existing examples for iOS of how this is done?
I found a sample project called MixerHost, which at one point uses an
AudioUnitSampleType
to hold the audio data that has been read from file, and assigns it to a buffer.
This type is defined as:
typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24
According to Apple:
The canonical audio sample type for audio units and other audio processing in iPhone OS is noninterleaved linear PCM with 8.24-bit fixed-point samples.
So, in other words, it holds non-interleaved linear PCM audio data.
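To make that 8.24 layout concrete, here is a small sketch (my own illustration, not code from the sample) of how such a fixed-point sample maps to and from a floating-point sample in the range [-1.0, 1.0):

#include <stdint.h>
#include <math.h>

// 8.24 fixed point: 8 integer bits and 24 fractional bits in a signed 32-bit integer.
#define kAudioUnitSampleFractionBits 24

// Example: 0.5f becomes 0x00800000, -1.0f becomes -16777216 (0xFF000000).
static int32_t FloatToFixed824 (float sample) {
    // Scale by 2^24 so the fractional part occupies the low 24 bits, then round.
    return (int32_t) lrintf (sample * (float) (1 << kAudioUnitSampleFractionBits));
}

static float Fixed824ToFloat (int32_t sample) {
    return (float) sample / (float) (1 << kAudioUnitSampleFractionBits);
}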
But I cannot figure out where this data is read in and where it is stored. Here is the code that loads the audio data and buffers it:
- (void) readAudioFilesIntoMemory {

    for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile) {

        NSLog (@"readAudioFilesIntoMemory - file %i", audioFile);

        // Instantiate an extended audio file object.
        ExtAudioFileRef audioFileObject = 0;

        // Open an audio file and associate it with the extended audio file object.
        OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);

        if (noErr != result || NULL == audioFileObject) {[self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result]; return;}

        // Get the audio file's length in frames.
        UInt64 totalFramesInFile = 0;
        UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileLengthFrames,
                     &frameLengthPropertySize,
                     &totalFramesInFile
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result]; return;}

        // Assign the frame count to the soundStructArray instance variable.
        soundStructArray[audioFile].frameCount = totalFramesInFile;

        // Get the audio file's number of channels.
        AudioStreamBasicDescription fileAudioFormat = {0};
        UInt32 formatPropertySize = sizeof (fileAudioFormat);

        result = ExtAudioFileGetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_FileDataFormat,
                     &formatPropertySize,
                     &fileAudioFormat
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}

        UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;

        // Allocate memory in the soundStructArray instance variable to hold the left channel,
        // or mono, audio data.
        soundStructArray[audioFile].audioDataLeft =
            (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));

        AudioStreamBasicDescription importFormat = {0};

        if (2 == channelCount) {

            soundStructArray[audioFile].isStereo = YES;

            // Sound is stereo, so allocate memory in the soundStructArray instance variable to
            // hold the right channel audio data.
            soundStructArray[audioFile].audioDataRight =
                (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
            importFormat = stereoStreamFormat;

        } else if (1 == channelCount) {

            soundStructArray[audioFile].isStereo = NO;
            importFormat = monoStreamFormat;

        } else {

            NSLog (@"*** WARNING: File format not supported - wrong number of channels");
            ExtAudioFileDispose (audioFileObject);
            return;
        }

        // Assign the appropriate mixer input bus stream data format to the extended audio
        // file object. This is the format used for the audio data placed into the audio
        // buffer in the SoundStruct data structure, which is in turn used in the
        // inputRenderCallback callback function.
        result = ExtAudioFileSetProperty (
                     audioFileObject,
                     kExtAudioFileProperty_ClientDataFormat,
                     sizeof (importFormat),
                     &importFormat
                 );

        if (noErr != result) {[self printErrorMessage: @"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}

        // Set up an AudioBufferList struct, which has two roles:
        //
        //     1. It gives the ExtAudioFileRead function the configuration it
        //        needs to correctly provide the data to the buffer.
        //
        //     2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so
        //        that audio data obtained from disk using the ExtAudioFileRead function
        //        goes to that buffer.

        // Allocate memory for the buffer list struct according to the number of
        // channels it represents.
        AudioBufferList *bufferList;

        bufferList = (AudioBufferList *) malloc (
            sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
        );

        if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}

        // Initialize the mNumberBuffers member.
        bufferList->mNumberBuffers = channelCount;

        // Initialize the mBuffers member to 0.
        AudioBuffer emptyBuffer = {0};
        size_t arrayIndex;
        for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
            bufferList->mBuffers[arrayIndex] = emptyBuffer;
        }

        // Set up the AudioBuffer structs in the buffer list.
        bufferList->mBuffers[0].mNumberChannels  = 1;
        bufferList->mBuffers[0].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
        bufferList->mBuffers[0].mData            = soundStructArray[audioFile].audioDataLeft;

        if (2 == channelCount) {
            bufferList->mBuffers[1].mNumberChannels  = 1;
            bufferList->mBuffers[1].mDataByteSize    = totalFramesInFile * sizeof (AudioUnitSampleType);
            bufferList->mBuffers[1].mData            = soundStructArray[audioFile].audioDataRight;
        }

        // Perform a synchronous, sequential read of the audio data out of the file and
        // into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
        UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;

        result = ExtAudioFileRead (
                     audioFileObject,
                     &numberOfPacketsToRead,
                     bufferList
                 );

        free (bufferList);

        if (noErr != result) {

            [self printErrorMessage: @"ExtAudioFileRead failure - " withStatus: result];

            // If reading from the file failed, then free the memory for the sound buffer.
            free (soundStructArray[audioFile].audioDataLeft);
            soundStructArray[audioFile].audioDataLeft = 0;

            if (2 == channelCount) {
                free (soundStructArray[audioFile].audioDataRight);
                soundStructArray[audioFile].audioDataRight = 0;
            }

            ExtAudioFileDispose (audioFileObject);
            return;
        }

        NSLog (@"Finished reading file %i into memory", audioFile);

        // Set the sample index to zero, so that playback starts at the
        // beginning of the sound.
        soundStructArray[audioFile].sampleNumber = 0;

        // Dispose of the extended audio file object, which also
        // closes the associated file.
        ExtAudioFileDispose (audioFileObject);
    }
}
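For context, the data placed into audioDataLeft / audioDataRight is later consumed by the mixer's input render callback (the one the comment above refers to). The following is only a rough sketch of that consumption side, modeled on the MixerHost sample rather than copied from it, using the soundStruct fields that appear in the listing above; this is also where a reversed buffer would ultimately be heard:

// Rough sketch of an input render callback that consumes the buffers filled above.
// Modeled on the MixerHost sample, not copied from it.
static OSStatus inputRenderCallback (void                        *inRefCon,
                                     AudioUnitRenderActionFlags  *ioActionFlags,
                                     const AudioTimeStamp        *inTimeStamp,
                                     UInt32                      inBusNumber,
                                     UInt32                      inNumberFrames,
                                     AudioBufferList             *ioData) {

    soundStruct *soundArray   = (soundStruct *) inRefCon;
    UInt64 frameTotalForSound = soundArray[inBusNumber].frameCount;
    BOOL   isStereo           = soundArray[inBusNumber].isStereo;
    UInt32 sampleNumber       = soundArray[inBusNumber].sampleNumber;

    AudioUnitSampleType *dataInLeft  = soundArray[inBusNumber].audioDataLeft;
    AudioUnitSampleType *dataInRight = isStereo ? soundArray[inBusNumber].audioDataRight : NULL;

    AudioUnitSampleType *outLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    AudioUnitSampleType *outRight = isStereo ? (AudioUnitSampleType *) ioData->mBuffers[1].mData : NULL;

    // Copy one render quantum of frames from the in-memory buffers into the
    // output buffers, wrapping around so the sound loops.
    for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
        outLeft[frameNumber] = dataInLeft[sampleNumber];
        if (isStereo) outRight[frameNumber] = dataInRight[sampleNumber];

        sampleNumber++;
        if (sampleNumber >= frameTotalForSound) sampleNumber = 0;
    }

    soundArray[inBusNumber].sampleNumber = sampleNumber;
    return noErr;
}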
Which part contains the array of audio samples that has to be reversed? Is it the AudioUnitSampleType here:
bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;
Note: audioDataLeft is defined as an AudioUnitSampleType, which is an SInt32, but not an array.
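For reference, in the MixerHost sample the elements of soundStructArray are instances of a struct declared roughly like this (reproduced from memory of MixerHostAudio.h, so treat it as an approximation); audioDataLeft is actually a pointer to a calloc'd block of AudioUnitSampleType values, one per frame, so it is used as an array:

typedef struct {
    BOOL                 isStereo;        // YES if audioDataRight holds data
    UInt64               frameCount;      // total frames read from the file
    UInt32               sampleNumber;    // current playback position, advanced by the render callback
    AudioUnitSampleType *audioDataLeft;   // left-channel (or mono) samples, frameCount entries
    AudioUnitSampleType *audioDataRight;  // right-channel samples, frameCount entries (stereo only)
} soundStruct, *soundStructPtr;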
I found a clue in a Core Audio mailing list thread:
Well, as far as I know it has nothing to do with the iPh*n* (unless some audio API has been omitted; I'm not a member of that program). AFAIR, AudioFile.h and ExtendedAudioFile.h should provide what you need to read or write a caf and access its streams/channels. Basically, you want to read each channel/stream backwards, so if you don't need the properties of the audio file it is pretty simple once you have a handle on that channel's data, assuming it is not in a compressed format. Considering the number of formats a caf can represent, this could take a few more lines of code than you think. Once you have a handle on the uncompressed data, it should be about as simple as reversing a string. Then, of course, you would replace the file's data with the reversed data...
This is what I have tried, but I hear nothing when I assign the reversed buffer to the mData of both channels:
AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
UInt64 j = 0;
for (UInt64 i = (totalFramesInFile - 1); i > -1; i--) {
    reversedData[j] = leftData[i];
    j++;
}