I have built a video player that analyzes the real-time audio and video tracks of the video that is currently playing. The videos are stored on the iOS device (in the app's Documents directory).
This all works fine. I use MTAudioProcessingTap to get at all the audio samples and do some FFT, and I analyze the video by simply copying the pixel buffers for the currently played CMTime (the AVPlayer currentTime property). As I said, this works fine.
But now I want to support AirPlay. AirPlay itself isn't difficult, but as soon as AirPlay is toggled on and the video plays on the Apple TV, my taps stop working. Somehow the MTAudioProcessingTap no longer processes, and the pixel buffers all come back empty... I can't get at the data.
Is there any way to get at this data?
To get the pixel buffers, I fire an event every few milliseconds and retrieve the player's currentTime. Then:
// Copy the frame for the given item time and lock it so the raw pixel data can be read.
CVPixelBufferRef imageBuffer = [videoOutput copyPixelBufferForItemTime:time itemTimeForDisplay:nil];
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *tempAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRelease(imageBuffer); // copyPixelBufferForItemTime returns a retained buffer
where tempAddress is my pixel buffer, and videoOutput is an AVPlayerItemVideoOutput.
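For context, the video output and the polling are set up roughly like the sketch below; the pixel format, observer interval, and queue are illustrative values, not necessarily the exact ones I use:
// Attach an AVPlayerItemVideoOutput to the player item so frames can be copied out.
NSDictionary *attributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attributes];
[playerItem addOutput:videoOutput];
// Poll every few milliseconds and pull the frame for the player's current time.
[player addPeriodicTimeObserverForInterval:CMTimeMake(1, 100)
                                     queue:dispatch_get_main_queue()
                                usingBlock:^(CMTime time) {
    if ([videoOutput hasNewPixelBufferForItemTime:time]) {
        CVPixelBufferRef imageBuffer = [videoOutput copyPixelBufferForItemTime:time itemTimeForDisplay:nil];
        // ... analyze the buffer as shown above ...
        if (imageBuffer) {
            CVPixelBufferRelease(imageBuffer);
        }
    }
}];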
For the audio, I use:
AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
// Create a processing tap for the input parameters
MTAudioProcessingTapCallbacks callbacks;
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.clientInfo = (__bridge void *)(self);
callbacks.init = init;
callbacks.prepare = prepare;
callbacks.process = process;
callbacks.unprepare = unprepare;
callbacks.finalize = finalize;
MTAudioProcessingTapRef tap;
OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                          kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
if (err || !tap) {
    NSLog(@"Unable to create the Audio Processing Tap");
    return;
}
inputParams.audioTapProcessor = tap;
CFRelease(tap); // the input parameters retain the tap, so balance the Create call
// Create a new AVAudioMix and assign it to our AVPlayerItem
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[inputParams];
playerItem.audioMix = audioMix;
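The init/prepare/process/unprepare/finalize callbacks are plain C functions defined elsewhere in my code. As a rough sketch (the FFT itself is omitted), the process callback just pulls the source audio and then analyzes it:
// Sketch of the tap's process callback: fetch the source audio, then analyze it.
void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    OSStatus status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                         flagsOut, NULL, numberFramesOut);
    if (status != noErr) {
        NSLog(@"MTAudioProcessingTapGetSourceAudio failed: %d", (int)status);
        return;
    }
    // ... run the FFT on the samples in bufferListInOut ...
}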
Regards, Nick