
I want to encode PCM (CMSampleBufferRefs coming in live from AVCaptureAudioDataOutputSampleBufferDelegate) into AAC.
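
For context, the sample buffers come from a perfectly ordinary capture setup, roughly like the sketch below (the class, queue name and session wiring here are placeholders, not my exact code):

#import <AVFoundation/AVFoundation.h>

@interface AudioCapture : NSObject <AVCaptureAudioDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation AudioCapture

- (void)start
{
    self.session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *microphone = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:microphone error:nil];
    if ([self.session canAddInput:input]) {
        [self.session addInput:input];
    }

    AVCaptureAudioDataOutput *output = [[AVCaptureAudioDataOutput alloc] init];
    dispatch_queue_t queue = dispatch_queue_create("audio.capture.queue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:queue];
    if ([self.session canAddOutput:output]) {
        [self.session addOutput:output];
    }

    [self.session startRunning];
}

// Each call delivers one PCM CMSampleBufferRef; the conversion code below runs here.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // encode sampleBuffer to AAC, see below
}

@end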

When the first CMSampleBufferRef arrives, I set up both the input and the output AudioStreamBasicDescription according to the documentation; the input one is taken straight from the sample buffer, the output one is filled in by hand:

AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(sampleBuffer));

AudioStreamBasicDescription outAudioStreamBasicDescription = {0}; // Always initialize the fields of a new audio stream basic description structure to zero, as shown here: ...
outAudioStreamBasicDescription.mSampleRate = 44100; // The number of frames per second of the data in the stream, when the stream is played at normal speed. For compressed formats, this field indicates the number of frames per second of equivalent decompressed data. The mSampleRate field must be nonzero, except when this structure is used in a listing of supported formats (see “kAudioStreamAnyRate”).
outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC; // kAudioFormatMPEG4AAC_HE does not work. Can't find `AudioClassDescription`. `mFormatFlags` is set to 0.
outAudioStreamBasicDescription.mFormatFlags = kMPEG4Object_AAC_SSR; // Format-specific flags to specify details of the format. Set to 0 to indicate no format flags. See “Audio Data Format Identifiers” for the flags that apply to each format.
outAudioStreamBasicDescription.mBytesPerPacket = 0; // The number of bytes in a packet of audio data. To indicate variable packet size, set this field to 0. For a format that uses variable packet size, specify the size of each packet using an AudioStreamPacketDescription structure.
outAudioStreamBasicDescription.mFramesPerPacket = 1024; // The number of frames in a packet of audio data. For uncompressed audio, the value is 1. For variable bit-rate formats, the value is a larger fixed number, such as 1024 for AAC. For formats with a variable number of frames per packet, such as Ogg Vorbis, set this field to 0.
outAudioStreamBasicDescription.mBytesPerFrame = 0; // The number of bytes from the start of one frame to the start of the next frame in an audio buffer. Set this field to 0 for compressed formats. ...
outAudioStreamBasicDescription.mChannelsPerFrame = 1; // The number of channels in each frame of audio data. This value must be nonzero.
outAudioStreamBasicDescription.mBitsPerChannel = 0; // ... Set this field to 0 for compressed formats.
outAudioStreamBasicDescription.mReserved = 0; // Pads the structure out to force an even 8-byte alignment. Must be set to 0.

and then the AudioConverterRef:

AudioClassDescription audioClassDescription;
memset(&audioClassDescription, 0, sizeof(audioClassDescription));
UInt32 size;
NSAssert(AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size) == noErr, nil);
uint32_t count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
NSAssert(AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size, descriptions) == noErr, nil);
for (uint32_t i = 0; i < count; i++) {

    if ((outAudioStreamBasicDescription.mFormatID == descriptions[i].mSubType) && (kAppleSoftwareAudioCodecManufacturer == descriptions[i].mManufacturer)) {

        memcpy(&audioClassDescription, &descriptions[i], sizeof(audioClassDescription));

    }
}
NSAssert(audioClassDescription.mSubType == outAudioStreamBasicDescription.mFormatID && audioClassDescription.mManufacturer == kAppleSoftwareAudioCodecManufacturer, nil);
AudioConverterRef audioConverter;
memset(&audioConverter, 0, sizeof(audioConverter));
NSAssert(AudioConverterNewSpecific(&inAudioStreamBasicDescription, &outAudioStreamBasicDescription, 1, &audioClassDescription, &audioConverter) == 0, nil);

Then I convert each CMSampleBufferRef into raw AAC data:

AudioBufferList inAaudioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inAaudioBufferList, sizeof(inAaudioBufferList), NULL, NULL, 0, &blockBuffer);
NSAssert(inAaudioBufferList.mNumberBuffers == 1, nil);

uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize;
uint8_t *buffer = (uint8_t *)malloc(bufferSize);
memset(buffer, 0, bufferSize);
AudioBufferList outAudioBufferList;
outAudioBufferList.mNumberBuffers = 1;
outAudioBufferList.mBuffers[0].mNumberChannels = inAaudioBufferList.mBuffers[0].mNumberChannels;
outAudioBufferList.mBuffers[0].mDataByteSize = bufferSize;
outAudioBufferList.mBuffers[0].mData = buffer;

UInt32 ioOutputDataPacketSize = 1;

NSAssert(AudioConverterFillComplexBuffer(audioConverter, inInputDataProc, &inAaudioBufferList, &ioOutputDataPacketSize, &outAudioBufferList, NULL) == 0, nil);

NSData *data = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];

free(buffer);
CFRelease(blockBuffer);

The inInputDataProc() implementation:

OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
    AudioBufferList audioBufferList = *(AudioBufferList *)inUserData;

    ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize;

    return noErr;
}

Now data holds my raw AAC. I wrap it into ADTS frames with the appropriate ADTS headers, and the sequence of these ADTS frames is a playable AAC file.
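
The ADTS framing step itself looks roughly like this (a minimal sketch assuming AAC-LC, 44100 Hz and mono; the helper name is mine, and profile, freqIdx and chanCfg have to match the converter's output format):

#import <Foundation/Foundation.h>

static NSData *adtsFrameForPacket(NSData *rawAAC)
{
    const int adtsLength = 7;
    NSUInteger fullLength = adtsLength + rawAAC.length; // the frame length field includes the header

    int profile = 2; // AAC LC object type, written below as profile - 1
    int freqIdx = 4; // sampling frequency index for 44100 Hz
    int chanCfg = 1; // 1 channel, front-center

    uint8_t header[7];
    header[0] = 0xFF;                                      // syncword, high 8 bits
    header[1] = 0xF9;                                      // syncword low bits, MPEG-2 ID, layer, no CRC
    header[2] = (uint8_t)(((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    header[3] = (uint8_t)(((chanCfg & 3) << 6) + (fullLength >> 11));
    header[4] = (uint8_t)((fullLength & 0x7FF) >> 3);      // middle bits of the 13-bit frame length
    header[5] = (uint8_t)(((fullLength & 7) << 5) + 0x1F); // last frame-length bits + buffer fullness (0x7FF = VBR)
    header[6] = 0xFC;                                      // rest of buffer fullness, 1 AAC frame per ADTS frame

    NSMutableData *frame = [NSMutableData dataWithBytes:header length:adtsLength];
    [frame appendData:rawAAC];
    return frame;
}

Each NSData produced by the converter goes through this once, and the resulting frames are simply concatenated into the output stream.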

But I don't understand this code as well as I would like to. In general, I don't really understand audio... I just put it together somehow from blogs, forums, and the documentation; it took a long time, and now it works, but I don't know why, or how to change some of the parameters. So here are my questions:

  1. I need to use this converter when the hardware encoder is occupied (by an AVAssetWriter). That's why I create the software converter explicitly via AudioConverterNewSpecific() instead of via AudioConverterNew(). But now setting outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC_HE; does not work: no AudioClassDescription is found, even with mFormatFlags set to 0. What do I lose by using kAudioFormatMPEG4AAC (with kMPEG4Object_AAC_SSR) instead of kAudioFormatMPEG4AAC_HE? What should I use for live streaming: kMPEG4Object_AAC_SSR or kMPEG4Object_AAC_Main?

  2. How do I change the sample rate properly? If I set outAudioStreamBasicDescription.mSampleRate to 22050 or 8000, for example, the audio plays back slowed down. I set the sampling frequency index in the ADTS header to the same frequency as outAudioStreamBasicDescription.mSampleRate.

  3. How do I change the bitrate? ffmpeg -i reports this for the generated AAC: Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 64 kb/s. How do I change it to 16 kbps, for example? The bitrate does go down as I lower the sample rate, but I believe that is not the only way, and as mentioned in 2, lowering the sample rate breaks playback anyway.

  4. How should the size of buffer be computed? Right now I set it with uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize; because I believe the compressed data will not be larger than the uncompressed data... but isn't that unnecessarily large?

  5. How do I set ioOutputDataPacketSize correctly? If I read the documentation right, I should set it to UInt32 ioOutputDataPacketSize = bufferSize / outAudioStreamBasicDescription.mBytesPerPacket; but mBytesPerPacket is 0. If I set it to 0, AudioConverterFillComplexBuffer() returns an error. If I set it to 1, it works, but I don't know why...

  6. inInputDataProc() has three "out" parameters. I only set ioData. Should I also set ioNumberDataPackets and outDataPacketDescription? Why, and how?


1 Answer


You may need to use a resampling audio unit to change the sample rate of your raw audio data before feeding it to the AAC converter. Otherwise there will be a mismatch between the AAC headers and the audio data.
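
As a rough illustration of that idea, the PCM could also be resampled with a second, PCM-to-PCM AudioConverter before it reaches the AAC converter (a substitute for the audio unit literally suggested above). The function and struct names are made up for the sketch, and a real implementation would keep the resampler alive across buffers and carry leftover input over:

#import <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Hands one captured PCM buffer to the converter, then reports end of input.
typedef struct {
    AudioBufferList *pcm; // the PCM pulled out of the CMSampleBufferRef
    UInt32 framesLeft;    // frames not yet handed to the converter
} ResampleSource;

static OSStatus resampleInputProc(AudioConverterRef inAudioConverter,
                                  UInt32 *ioNumberDataPackets,
                                  AudioBufferList *ioData,
                                  AudioStreamPacketDescription **outDataPacketDescription,
                                  void *inUserData)
{
    ResampleSource *source = (ResampleSource *)inUserData;
    if (source->framesLeft == 0) {
        *ioNumberDataPackets = 0; // no more input for this call
        return noErr;
    }
    ioData->mBuffers[0] = source->pcm->mBuffers[0];
    *ioNumberDataPackets = source->framesLeft; // for LPCM, 1 packet == 1 frame
    source->framesLeft = 0;
    return noErr;
}

// Converts inFrames frames of PCM described by inPCMFormat to the same layout
// at targetSampleRate. This only demonstrates the call sequence; the output is
// discarded at the end instead of being fed on to the AAC converter.
static void resamplePCM(AudioStreamBasicDescription inPCMFormat,
                        Float64 targetSampleRate,
                        AudioBufferList *inPCM,
                        UInt32 inFrames)
{
    AudioStreamBasicDescription outPCMFormat = inPCMFormat;
    outPCMFormat.mSampleRate = targetSampleRate; // only the rate changes

    AudioConverterRef resampler = NULL;
    if (AudioConverterNew(&inPCMFormat, &outPCMFormat, &resampler) != noErr) {
        return;
    }

    UInt32 outFrames = (UInt32)(inFrames * targetSampleRate / inPCMFormat.mSampleRate) + 32;
    UInt32 outBytes = outFrames * outPCMFormat.mBytesPerFrame;
    uint8_t *outBuffer = malloc(outBytes);

    AudioBufferList outList;
    outList.mNumberBuffers = 1;
    outList.mBuffers[0].mNumberChannels = outPCMFormat.mChannelsPerFrame;
    outList.mBuffers[0].mDataByteSize = outBytes;
    outList.mBuffers[0].mData = outBuffer;

    ResampleSource source = { inPCM, inFrames };
    UInt32 ioOutputFrames = outFrames; // in: capacity of outList, out: frames actually produced
    AudioConverterFillComplexBuffer(resampler, resampleInputProc, &source,
                                    &ioOutputFrames, &outList, NULL);

    // outList.mBuffers[0] now holds ioOutputFrames frames at targetSampleRate,
    // which is what would be handed to the AAC converter instead of the raw PCM.

    free(outBuffer);
    AudioConverterDispose(resampler);
}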

Answered 2013-11-08T20:18:54.270