I am having some problems trying to create ProRes-encoded .mov files using the AVFoundation framework and AVAsset.

This is on OSX 10.10.5, using Xcode 7 and linking against the 10.9 libraries. So far I have managed to create valid ProRes files containing video and multiple audio channels.

(I am creating multiple uncompressed 48 kHz, 16-bit PCM audio tracks.)

Adding video frames works fine, and adding audio frames works fine too, or at least the calls succeed in code.

However, when I play the file back, the audio seems to repeat in runs of 12, 13, 14 or 15 frames.

Looking at the waveform, the repeated audio is easy to see in the resulting *.mov...

That is, the first 13 (or X) video frames all contain exactly the same audio, which then repeats over the next X frames, then again and again, and so on...

The video is fine; it is only the audio that seems to loop/repeat.

The problem shows up regardless of how many audio channels/tracks I use as the source; I have tested with just 1 track as well as with 4 and 8 tracks.

It is also independent of the video format and the number of samples I feed the system, i.e. 720p60, 1080p23 and 1080i59 all exhibit the same faulty behaviour.

  • Actually, the 720p captures seem to repeat the audio frames 30 or 31 times, whereas the 1080 formats repeat them only 12 or 13 times,

but I am definitely submitting different audio data to the audio-encoding/sample-buffer creation process, as I have logged this in considerable detail (that logging is not shown in the code below).

I have tried many different ways of modifying the code to expose the problem, without success, so I am asking here in the hope that someone can spot the problem in my code or give me some insight into the issue.

The code I am using is as follows:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#include <CoreAudio/CoreAudioTypes.h> // for FillOutASBDForLPCM
#include <vector>
// (headers for the hardware/SDI capture SDK are omitted, as is the related code)

int main(int argc, const char * argv[])
{
    @autoreleasepool
    {
        NSLog(@"Hello, World!  - Welcome to the ProResCapture With Audio sample app. ");
        OSStatus status;
        AudioStreamBasicDescription audioFormat;
        CMAudioFormatDescriptionRef audioFormatDesc;

            // OK, so let's set up the hardware first, and then we can get on with the actual capture and compress stuff
        HARDWARE_HANDLE pHardware = sdiFactory();
        if (pHardware)
        {
            unsigned long ulUpdateType = UPD_FMT_FRAME;
            unsigned long ulFieldCount = 0;
            unsigned int numAudioChannels = 4; //8; //4;
            int numFramesToCapture = 300;

            gBFHancBuffer = (unsigned int*)myAlloc(gHANC_SIZE);

            int audioSize = 2002 * 4 * 16;
            short* pAudioSamples = (short*)new char[audioSize];
            std::vector<short*> vecOfNonInterleavedAudioSamplesPtrs;
            for (int i = 0; i < 16; i++)
            {
                vecOfNonInterleavedAudioSamplesPtrs.push_back((short*)myAlloc(2002 * sizeof(short)));
            }

            bool bVideoModeIsValid = SetupAndConfigureHardwareToCaptureIncomingVideo();

            if (bVideoModeIsValid)
            {

                gBFBytes = (BLUE_UINT32*)myAlloc(gGoldenSize);

                bool canAddVideoWriter = false;
                bool canAddAudioWriter = false;
                int nAudioSamplesWritten = 0;

                // declare the vars for our various AVAsset elements
                AVAssetWriter* assetWriter = nil;
                AVAssetWriterInput* assetWriterInputVideo = nil;
                AVAssetWriterInput* assetWriterAudioInput[16];


                AVAssetWriterInputPixelBufferAdaptor* adaptor = nil;
                NSURL* localOutputURL = nil;
                NSError* localError = nil;

                // create the file we are going to be writing to
                localOutputURL = [NSURL URLWithString:@"file:///Volumes/Media/ProResAVCaptureAnyFormat.mov"];

                assetWriter = [[AVAssetWriter alloc] initWithURL: localOutputURL fileType:AVFileTypeQuickTimeMovie error:&localError];
                if (assetWriter)
                {
                    assetWriter.shouldOptimizeForNetworkUse = NO;

                    // Lets configure the Audio and Video settings for this writer...
                    {
                          // Video First.

                          // Add a video input
                          // create a dictionary with the settings we want ie. Prores capture and width and height.
                          NSMutableDictionary* videoSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                                                AVVideoCodecAppleProRes422, AVVideoCodecKey,
                                                                [NSNumber numberWithInt:width], AVVideoWidthKey,
                                                                [NSNumber numberWithInt:height], AVVideoHeightKey,
                                                                nil];

                          assetWriterInputVideo = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo outputSettings:videoSettings];
                          adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterInputVideo
                                                                                                     sourcePixelBufferAttributes:nil];

                          canAddVideoWriter = [assetWriter canAddInput:assetWriterInputVideo];
                    }

                    { // Add a Audio AssetWriterInput

                          // Create a dictionary with the settings we want ie. Uncompressed PCM audio 16 bit little endian.
                          NSMutableDictionary* audioSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                                                [NSNumber numberWithFloat:48000.0], AVSampleRateKey,
                                                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                                                [NSNumber numberWithUnsignedInteger:1], AVNumberOfChannelsKey,
                                                                nil];

                          // OR use... FillOutASBDForLPCM(AudioStreamBasicDescription& outASBD, Float64 inSampleRate, UInt32 inChannelsPerFrame, UInt32 inValidBitsPerChannel, UInt32 inTotalBitsPerChannel, bool inIsFloat, bool inIsBigEndian, bool inIsNonInterleaved = false)
                          UInt32 inValidBitsPerChannel = 16;
                          UInt32 inTotalBitsPerChannel = 16;
                          bool inIsFloat = false;
                          bool inIsBigEndian = false;
                          UInt32 inChannelsPerTrack = 1;
                          FillOutASBDForLPCM(audioFormat, 48000.00, inChannelsPerTrack, inValidBitsPerChannel, inTotalBitsPerChannel, inIsFloat, inIsBigEndian);

                          status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                                  &audioFormat,
                                                                  0,
                                                                  NULL,
                                                                  0,
                                                                  NULL,
                                                                  NULL,
                                                                  &audioFormatDesc
                                                                  );

                          for (int t = 0; t < numAudioChannels; t++)
                          {
                              assetWriterAudioInput[t] = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
                              canAddAudioWriter = [assetWriter canAddInput:assetWriterAudioInput[t] ];

                              if (canAddAudioWriter)
                              {
                                  assetWriterAudioInput[t].expectsMediaDataInRealTime = YES; //true;
                                  [assetWriter addInput:assetWriterAudioInput[t] ];
                              }
                          }


                          // grabbed purely for debugging / inspection; not otherwise used
                          CMFormatDescriptionRef myFormatDesc = assetWriterAudioInput[0].sourceFormatHint;
                          NSString* medType = [assetWriterAudioInput[0] mediaType];
                    }

                    if(canAddVideoWriter)
                    {
                          // tell the asset writer to expect media in real time.
                          assetWriterInputVideo.expectsMediaDataInRealTime = YES; //true;

                          // add the Input(s)
                          [assetWriter addInput:assetWriterInputVideo];

                          // Start writing the frames..
                          BOOL success = true;
                          success = [assetWriter startWriting];
                          CMTime startTime = CMTimeMake(0, fpsRate);
                          [assetWriter startSessionAtSourceTime:kCMTimeZero];
                          // [assetWriter startSessionAtSourceTime:startTime];

                      if (success)
                      {
                          startOurVideoCaptureProcess();

                          // **** possible enhancement is to use a pixelBufferPool to manage multiple buffers at once...
                          CVPixelBufferRef buffer = NULL;
                          int kRecordingFPS = fpsRate;
                          bool frameAdded = false;
                          unsigned int bufferID;


                          for( int i = 0; i < numFramesToCapture; i++)
                          {
                              printf("\n");

                              buffer = pixelBufferFromCard(bufferID, width, height, memFmt); // This function gets a CVPixelBufferRef from our device, as well as grabbing the audio data
                              while(!adaptor.assetWriterInput.readyForMoreMediaData)
                              {
                                    printf(" readyForMoreMediaData FAILED \n");
                              }

                              if (buffer)
                              {
                                  // Add video
                                  printf("appending Frame %d ", i);
                                  CMTime frameTime = CMTimeMake(i, kRecordingFPS);
                                  frameAdded = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
                                  if (frameAdded)
                                      printf("VideoAdded.....\n ");

                                  // Add Audio
                                  {
                                      // Do some Processing on the captured data to extract the interleaved Audio Samples for each channel
                                      struct hanc_decode_struct decode;
                                      DecodeHancFrameEx(gBFHancBuffer, decode);
                                      int nAudioSamplesCaptured = 0;
                                      if(decode.no_audio_samples > 0)
                                      {
                                          printf("completed deCodeHancEX, found %d samples \n", ( decode.no_audio_samples  / numAudioChannels) );
                                          nAudioSamplesCaptured = decode.no_audio_samples  / numAudioChannels;
                                      }

                                      CMTime audioTimeStamp = CMTimeMake(nAudioSamplesWritten, 48000); // (samples written so far) at the 48 kHz audio sample rate


                                      // This function repacks the audio from interleaved PCM data into a vector of individual arrays of audio data, one per channel
                                      RepackDecodedHancAudio((void*)pAudioSamples, numAudioChannels, nAudioSamplesCaptured, vecOfNonInterleavedAudioSamplesPtrs);

                                      for (int t = 0; t < numAudioChannels; t++)
                                      {
                                          CMBlockBufferRef blockBuf = NULL; // ***********  MUST release these AFTER adding the samples to the assetWriter...
                                          CMSampleBufferRef cmBuf = NULL;

                                          int sizeOfSamplesInBytes = nAudioSamplesCaptured * 2;  // always 16bit memory samples...

                                          // Create sample Block buffer for adding to the audio input.
                                          status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                                                      (void*)vecOfNonInterleavedAudioSamplesPtrs[t],
                                                                                      sizeOfSamplesInBytes,
                                                                                      kCFAllocatorNull,
                                                                                      NULL,
                                                                                      0,
                                                                                      sizeOfSamplesInBytes,
                                                                                      0,
                                                                                      &blockBuf);

                                          if (status != noErr)
                                                NSLog(@"CMBlockBufferCreateWithMemoryBlock error");

                                          status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                                                                   blockBuf,
                                                                                                   TRUE,
                                                                                                   0,
                                                                                                   NULL,
                                                                                                   audioFormatDesc,
                                                                                                   nAudioSamplesCaptured,
                                                                                                   audioTimeStamp,
                                                                                                   NULL,
                                                                                                   &cmBuf);
                                          if (status != noErr)
                                                NSLog(@"CMSampleBufferCreate error");

                                          // let's check if the CMSampleBuffer is valid
                                          bool bValid = CMSampleBufferIsValid(cmBuf);

                                          // examine these values for debugging info....
                                          CMTime cmTimeSampleDuration = CMSampleBufferGetDuration(cmBuf);
                                          CMTime cmTimePresentationTime = CMSampleBufferGetPresentationTimeStamp(cmBuf);

                                          if (!bValid)
                                              NSLog(@"Invalid Buffer found!!! possible CMSampleBufferCreate error?");


                                          if(!assetWriterAudioInput[t].readyForMoreMediaData)
                                              printf(" readyForMoreMediaData FAILED  - Had to Drop a frame\n");
                                          else
                                          {
                                              if(assetWriter.status == AVAssetWriterStatusWriting)
                                              {
                                                  BOOL r = YES;
                                                  r = [assetWriterAudioInput[t] appendSampleBuffer:cmBuf];
                                                  if (!r)
                                                  {
                                                      NSLog(@"appendSampleBuffer error");
                                                  }
                                                  else
                                                      success = true;

                                              }
                                              else
                                                  printf("AssetWriter Not ready???!? \n");
                                          }

                                          if (cmBuf)
                                          {
                                              CFRelease(cmBuf);
                                              cmBuf = 0;
                                          }
                                          if (blockBuf)
                                          {
                                              CFRelease(blockBuf);
                                              blockBuf = 0;
                                          }
                                      }

                                      nAudioSamplesWritten = nAudioSamplesWritten + nAudioSamplesCaptured;
                                  }

                                  if (success)
                                  {
                                      printf("Audio tracks Added..");
                                  }
                                  else
                                  {
                                      NSError* nsERR = [assetWriter error];
                                      printf("Problem Adding Audio tracks / samples");
                                  }
                                  printf("Success \n");
                              }

                              if (buffer)
                              {
                                  CVBufferRelease(buffer);
                              }
                          }
                      }
                      AVAssetWriterStatus sta = [assetWriter status];
                      CMTime endTime = CMTimeMake((numFramesToCapture - 1), fpsRate);

                      if (audioFormatDesc)
                      {
                          CFRelease(audioFormatDesc);
                          audioFormatDesc = 0;
                      }

                      // Finish the session
                      StopVideoCaptureProcess();
                      [assetWriterInputVideo markAsFinished];
                      for (int t = 0; t < numAudioChannels; t++)
                      {
                          [assetWriterAudioInput[t] markAsFinished];
                      }

                      [assetWriter endSessionAtSourceTime:endTime];

                      bool finishedSuccessfully = [assetWriter finishWriting];
                      if (finishedSuccessfully)
                          NSLog(@"Writing file ended successfully \n");
                      else
                      {
                          NSLog(@"Writing file ended WITH ERRORS...");
                          sta = [assetWriter status];
                          if (sta != AVAssetWriterStatusCompleted)
                          {
                              NSError* nsERR = [assetWriter error];
                              printf("investigating the error \n");
                          }
                      }
                    }
                    else
                    {
                        NSLog(@"Unable to Add the InputVideo Asset Writer to the AssetWriter, file will not be written - Exiting");
                    }

                    if (audioFormatDesc)
                        CFRelease(audioFormatDesc);
                }


                for (int i = 0; i < 16; i++)
                {
                    if (vecOfNonInterleavedAudioSamplesPtrs[i])
                    {
                        bfFree(2002 * sizeof(unsigned short), vecOfNonInterleavedAudioSamplesPtrs[i]);
                        vecOfNonInterleavedAudioSamplesPtrs[i] = nullptr;
                    }
                }

            }
            else
            {
                NSLog(@"Unable to find a valid input signal - Exiting");
            }


            if (pAudioSamples)
                delete[] pAudioSamples; // array delete to match the new char[] allocation above
        }
    }
    return 0;
}

This is a very basic example that talks to some specialist hardware (the hardware-related code is omitted).

It grabs a frame of video along with its audio, then processes the audio from interleaved PCM into an individual array of PCM data for each track.

Each buffer is then appended to the appropriate track, whether video or audio...

Finally the AVAsset machinery is finished off and closed, and I exit and clean up.
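
For reference, the de-interleaving step amounts to something like the sketch below; the real RepackDecodedHancAudio belongs to the omitted hardware code, so this helper (deinterleavePCM) and its exact signature are purely illustrative:

static void deinterleavePCM(const short* interleaved,        // ch0,ch1,...,chN-1,ch0,ch1,...
                            unsigned int numChannels,
                            int samplesPerChannel,
                            std::vector<short*>& perChannel) // one destination buffer per channel
{
    // Walk the interleaved stream and scatter each channel's samples into its
    // own contiguous buffer, ready for CMBlockBufferCreateWithMemoryBlock.
    for (int s = 0; s < samplesPerChannel; s++)
    {
        for (unsigned int c = 0; c < numChannels; c++)
        {
            perChannel[c][s] = interleaved[s * numChannels + c];
        }
    }
}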

Any help would be greatly appreciated,

Cheers,

James


1 Answer


OK, I finally found a working solution to this problem.

The solution came in two parts:

  1. I moved from using CMAudioSampleBufferCreateWithPacketDescriptions to using CMSampleBufferCreate(..) with the appropriate arguments for that call.

  2. When initially experimenting with CMSampleBufferCreate I misused some of its arguments, which gave me the same results I originally outlined here; but after carefully checking the values I was passing for the CMSampleTimingInfo struct (particularly the duration part), I finally got everything working!

So it seems I was creating the CMBlockBufferRef correctly all along, but I needed to be much more careful in how I used it to create the CMSampleBufferRef that I passed to the AVAssetWriterInput!
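
As a rough illustration, here is a minimal sketch of the working approach. The helper name is mine, and the variables it takes (blockBuf, audioFormatDesc and the two sample counts) are assumed to be the same ones as in the question's code, so treat it as an example rather than my exact final code:

static CMSampleBufferRef createAudioSampleBuffer(CMBlockBufferRef blockBuf,
                                                 CMFormatDescriptionRef audioFormatDesc,
                                                 CMItemCount nAudioSamplesCaptured,
                                                 int64_t nAudioSamplesWritten)
{
    // The crucial part: each LPCM sample lasts exactly 1/48000 s, and the
    // presentation time is the running sample count in that same 48 kHz timescale.
    CMSampleTimingInfo timing;
    timing.duration = CMTimeMake(1, 48000);
    timing.presentationTimeStamp = CMTimeMake(nAudioSamplesWritten, 48000);
    timing.decodeTimeStamp = kCMTimeInvalid;   // no separate decode time for LPCM

    size_t sampleSize = sizeof(short);         // 16-bit mono samples, 2 bytes each

    CMSampleBufferRef cmBuf = NULL;
    OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                           blockBuf,               // data already sits in the block buffer
                                           true,                   // dataReady
                                           NULL, NULL,             // no make-data-ready callback
                                           audioFormatDesc,        // the LPCM format description
                                           nAudioSamplesCaptured,  // number of samples in this buffer
                                           1, &timing,             // one timing entry covers all samples
                                           1, &sampleSize,         // one constant per-sample size
                                           &cmBuf);
    if (status != noErr)
        NSLog(@"CMSampleBufferCreate error: %d", (int)status);
    return cmBuf;  // append to the AVAssetWriterInput, then CFRelease as before
}

With a sensible per-sample duration the writer advances the audio track correctly; with the wrong duration each appended buffer effectively covered the wrong stretch of the timeline, which seems consistent with the repeating audio I originally described.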

Hopefully this helps someone else, as it was a nasty little problem for me!

James

answered 2017-01-10T05:41:37.607