
I am trying to read a video file with AVAssetReader and pass the audio off to CoreAudio for processing (adding effects and such) before saving it back to disk with AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play back correctly through the speakers. This makes me confident that my AUGraph is set up properly, since I can hear everything working. I am setting the subType to GenericOutput instead so I can do the rendering myself and get the adjusted audio back.
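For context, the output node is created roughly along these lines (a trimmed-down sketch, not my exact code; graph and outputUnit are the AUGraph and output AudioUnit ivars):

AudioComponentDescription outputDesc = {0};
outputDesc.componentType = kAudioUnitType_Output;
// kAudioUnitSubType_RemoteIO here plays through the speakers;
// kAudioUnitSubType_GenericOutput lets me pull the rendered audio myself
outputDesc.componentSubType = kAudioUnitSubType_GenericOutput;
outputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode outputNode;
CheckError(AUGraphAddNode(graph, &outputDesc, &outputNode), @"AUGraphAddNode output");
CheckError(AUGraphOpen(graph), @"AUGraphOpen");
CheckError(AUGraphNodeInfo(graph, outputNode, NULL, &outputUnit), @"AUGraphNodeInfo output");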

I am reading in the audio and passing the CMSampleBufferRef to copyBuffer. This puts the audio into a circular buffer that will be read from later.
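The reading side is roughly the following (a simplified sketch; readerOutput is assumed to be an AVAssetReaderTrackOutput for the audio track, and the pacing against _readyForMoreBytes is omitted here):

CMSampleBufferRef sample = NULL;
while ((sample = [readerOutput copyNextSampleBuffer]) != NULL) {
    [self copyBuffer:sample];   // copies the PCM into the circular buffer
    CFRelease(sample);
}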

- (void)copyBuffer:(CMSampleBufferRef)buf {  
    if (_readyForMoreBytes == NO)  
    {  
        return;  
    }  

    AudioBufferList abl;  
    CMBlockBufferRef blockBuffer;  
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);  

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);  
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);  

    if (!bytesCopied){  
        // Circular buffer is full: stop accepting new bytes and stash this frame in the rescue buffer
        _readyForMoreBytes = NO;  

        if (size > kRescueBufferSize){  
            NSLog(@"Unable to allocate enought space for rescue buffer, dropping audio frame");  
        } else {  
            if (rescueBuffer == nil) {  
                rescueBuffer = malloc(kRescueBufferSize);  
            }  

            rescueBufferSize = size;  
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);  
        }  
    }  

    CFRelease(blockBuffer);  
    if (!self.hasBuffer && bytesCopied > 0)  
    {  
        self.hasBuffer = YES;  
    }  
} 
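setReadyForMoreBytes isn't shown above; it is essentially the mirror of the rescue-buffer path, roughly like this (a sketch, not verbatim):

- (void)setReadyForMoreBytes {
    // If a frame was stashed in the rescue buffer, try to push it into the
    // circular buffer before accepting new samples again.
    if (rescueBufferSize > 0) {
        if (!TPCircularBufferProduceBytes(&circularBuffer, rescueBuffer, rescueBufferSize)) {
            return; // still no room, stay paused
        }
        rescueBufferSize = 0;
    }
    _readyForMoreBytes = YES;
}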

Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList that is passed in. Like I said before, if the output is set to RemoteIO this causes the audio to play correctly through the speakers. When AudioUnitRender finishes it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).

-(CMSampleBufferRef)processOutput  
{  
    if(self.offline == NO)  
    {  
        return NULL;  
    }  

    AudioUnitRenderActionFlags flags = 0;  
    AudioTimeStamp inTimeStamp;  
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));  
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;  
    UInt32 busNumber = 0;  

    UInt32 numberFrames = 512;  
    inTimeStamp.mSampleTime = 0;  
    UInt32 channelCount = 2;  

    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));  
    bufferList->mNumberBuffers = channelCount;  
    for (int j=0; j<channelCount; j++)  
    {  
        AudioBuffer buffer = {0};  
        buffer.mNumberChannels = 1;  
        buffer.mDataByteSize = numberFrames*sizeof(SInt32);  
        buffer.mData = calloc(numberFrames,sizeof(SInt32));  

        bufferList->mBuffers[j] = buffer;  

    }  
    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");  

    CMSampleBufferRef sampleBufferRef = NULL;  
    CMFormatDescriptionRef format = NULL;  
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };  
    AudioStreamBasicDescription audioFormat = self.audioFormat;  
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");  
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");  
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");  

    return sampleBufferRef;  
} 
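The plan is then to hand the returned buffer to the writer, roughly like this (a sketch only; writerInput is an assumed AVAssetWriterInput on a writer whose session has already been started):

CMSampleBufferRef processed = [self processOutput];
if (processed != NULL) {
    if (writerInput.readyForMoreMediaData) {
        [writerInput appendSampleBuffer:processed];
    }
    CFRelease(processed); // processOutput returns a +1 reference
}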


static OSStatus playbackCallback(void *inRefCon,  
                                 AudioUnitRenderActionFlags *ioActionFlags,  
                                 const AudioTimeStamp *inTimeStamp,  
                                 UInt32 inBusNumber,  
                                 UInt32 inNumberFrames,  
                                 AudioBufferList *ioData)  
{  
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;  
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;  

    // Zero the output buffer first in case there is nothing to copy into it
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);  

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;  

    if (p.hasBuffer){  
        int32_t availableBytes;  
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);  

        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;  

        int bytesToRead = MIN(availableBytes, requestedBytesSize);  
        memcpy(outSample, bufferTail, bytesToRead);  
        TPCircularBufferConsume([p getBuffer], bytesToRead);  

        if (availableBytes <= requestedBytesSize*2){  
            [p setReadyForMoreBytes];  
        }  

        if (availableBytes <= requestedBytesSize) {  
            p.hasBuffer = NO;  
        }    
    }  
    return noErr;  
} 
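For completeness, playbackCallback is attached as the input callback on the first node roughly like this (a sketch; firstNode stands in for whatever AUNode sits at the head of my graph):

AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;
CheckError(AUGraphSetNodeInputCallback(graph, firstNode, 0, &callbackStruct),
           @"AUGraphSetNodeInputCallback");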

The CMSampleBufferRef I am passing in looks valid (below is a dump of the object from the debugger)

CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180  
  invalid = NO  
  dataReady = NO  
  makeDataReadyCallback = 0x0  
  makeDataReadyRefcon = 0x0  
  formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {  
  mediaType:'soun'  
  mediaSubType:'lpcm'  
  mediaSpecific: {  
  ASBD: {  
  mSampleRate: 44100.000000  
  mFormatID: 'lpcm'  
  mFormatFlags: 0xc2c  
  mBytesPerPacket: 2  
  mFramesPerPacket: 1  
  mBytesPerFrame: 2  
  mChannelsPerFrame: 1  
  mBitsPerChannel: 16 }  
  cookie: {(null)}  
  ACL: {(null)}  
  }  
  extensions: {(null)}  
}  
  sbufToTrackReadiness = 0x0  
  numSamples = 512  
  sampleTimingArray[1] = {  
  {PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},  
  }  
  dataBuffer = 0x0  

The buffer list looks like this

Printing description of bufferList:  
(AudioBufferList *) bufferList = 0x00007f87d280b0a0  
Printing description of bufferList->mNumberBuffers:  
(UInt32) mNumberBuffers = 2  
Printing description of bufferList->mBuffers:  
(AudioBuffer [1]) mBuffers = {  
  [0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)  
}  

Really at a loss here and hoping someone can help. Thanks in advance,

In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio comes from an mp4 that I shot on my iPhone 6 and then saved to my laptop.

I have read the following questions, but still to no avail; things are not working.

How to convert AudioBufferList to CMSampleBuffer?

Converting an AudioBufferList to a CMSampleBuffer produces unexpected results

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

core audio offline rendering GenericOutput

UPDATE

I poked around some more and noticed that my AudioBufferList right before AudioUnitRender runs looks like this:

bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048

mDataByteSize is numberFrames*sizeof(SInt32), which is 512 * 4. When I look at the AudioBufferList passed in to playbackCallback, the list looks like this:

bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024

Not really sure where that other buffer went, or the other 1024 bytes of size...
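(For what it's worth, the 1024 does line up with the ASBD dumped above: 512 frames × mBytesPerFrame of 2 for 16-bit mono = 1024 bytes, whereas my allocation was 512 × sizeof(SInt32) = 2048 bytes.)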

If, when I am done with the call to Render, I do something like this

AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;

and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.

If I try setting the BufferList up to have mNumberBuffers of 1, or set its mDataByteSize to numberFrames*sizeof(SInt16), I get -50 when calling AudioUnitRender.
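One thing I can do to see what the output unit is actually rendering in is query its stream format (a diagnostic sketch, not something from my original code):

AudioStreamBasicDescription renderFormat = {0};
UInt32 propSize = sizeof(renderFormat);
CheckError(AudioUnitGetProperty(outputUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                0,
                                &renderFormat,
                                &propSize),
           @"get output stream format");
NSLog(@"render format: %u ch, %u bytes/frame, flags 0x%x",
      (unsigned)renderFormat.mChannelsPerFrame,
      (unsigned)renderFormat.mBytesPerFrame,
      (unsigned)renderFormat.mFormatFlags);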

UPDATE 2

I hooked up a render callback so I could inspect the output while playing the sound over the speakers. I noticed that the output going to the speakers also has an AudioBufferList with 2 buffers, and that the mDataByteSize during the input callback is 1024 while in the render callback it is 2048, which is the same as what I have been seeing when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList, I notice that the bytes in the 2 buffers are identical, which means I can just ignore the second buffer. But I am not sure how to handle the fact that the data is 2048 bytes in size after being rendered instead of the 1024 it came in as. Any ideas on why that could be happening? Is it in more of a raw form after going through the audio graph, and is that why the size is doubling?
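(For reference, the tap I used for that inspection was roughly this kind of render-notify callback; this is a sketch, my exact logging differed:)

static OSStatus renderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    // Inspect the buffers only after the unit has rendered into them
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        NSLog(@"post-render: %u buffers, first buffer is %u bytes",
              (unsigned)ioData->mNumberBuffers,
              (unsigned)ioData->mBuffers[0].mDataByteSize);
    }
    return noErr;
}

// hooked up once with:
// CheckError(AudioUnitAddRenderNotify(outputUnit, renderNotify, (__bridge void *)self), @"AudioUnitAddRenderNotify");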


1 Answer


It sounds like the problem you are dealing with is a discrepancy in the number of channels. The reason you are seeing data in blocks of 2048 instead of 1024 is that it is feeding you back two channels (stereo). Check to make sure all of your audio units are properly configured to use mono throughout the entire audio graph, including the pitch unit and any audio format descriptions.
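Something along these lines, applied to each unit in the graph, is what I mean (a sketch only; which scopes and buses need it depends on your graph):

AudioStreamBasicDescription monoFormat = {0};
monoFormat.mSampleRate       = 44100.0;
monoFormat.mFormatID         = kAudioFormatLinearPCM;
monoFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
monoFormat.mChannelsPerFrame = 1;   // mono everywhere in the graph
monoFormat.mBitsPerChannel   = 16;
monoFormat.mBytesPerFrame    = 2;
monoFormat.mFramesPerPacket  = 1;
monoFormat.mBytesPerPacket   = 2;

CheckError(AudioUnitSetProperty(outputUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input,
                                0,
                                &monoFormat,
                                sizeof(monoFormat)),
           @"set mono format on outputUnit input scope");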

One thing to watch out for is that calls to AudioUnitSetProperty can fail, so be sure to wrap those in CheckError() as well.
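Your CheckError isn't shown, but any minimal version that logs the OSStatus will do, e.g. (an assumed implementation):

static void CheckError(OSStatus error, NSString *operation)
{
    if (error == noErr) return;
    // Log and keep going; you may prefer to assert/abort here instead.
    NSLog(@"Error during %@: OSStatus %d", operation, (int)error);
}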

answered 2015-07-29T19:55:26.500