I am using the OpenTok iOS SDK to stream from an iPhone to Chrome. What I would like to do is record a high-res version of the video while streaming.

Using a custom video capturer via the OTVideoCapture interface from "Example 2: Let's Build OTPublisher", I can successfully record the video sample buffers to a file. The problem is that I cannot find any reference to the audio data captured from the microphone.

I assume an audio input (AVCaptureDeviceInput) feeding an audio output (AVCaptureAudioDataOutput) via AVCaptureAudioDataOutputSampleBufferDelegate is used somewhere.

Does anyone know how to access it from the OpenTok iOS SDK?

1 Answer

In captureOutput:didOutputSampleBuffer:fromConnection:, the fromConnection parameter distinguishes between the audio and video connections, so you can tell which kind of buffer you received.

To set up the audio input/output, you can add the following to the initCapture method of Let-Build-OTPublisher:

    // Add audio input / output to the existing capture session
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    _audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
    if ([_captureSession canAddInput:_audioInput])
    {
        NSLog(@"added audio device input");
        [_captureSession addInput:_audioInput];
    }

    _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    if ([_captureSession canAddOutput:_audioOutput])
    {
        NSLog(@"audio output added");
        [_captureSession addOutput:_audioOutput];
    }

    // Deliver audio sample buffers on the same serial queue the video capturer uses
    [_audioOutput setSampleBufferDelegate:self queue:_capture_queue];
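As a rough sketch of the routing described above, the shared delegate callback can compare the output that delivered the buffer to tell audio from video. This assumes the capture class is the sample buffer delegate for both outputs, and that `_videoOutput`, `_audioWriterInput`, and `_videoWriterInput` are your own additions (e.g. AVAssetWriterInput instances for the high-res recording); they are not part of the OpenTok sample code:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (captureOutput == _audioOutput) {
            // Microphone audio buffer: append it to your recording,
            // e.g. via an AVAssetWriterInput configured for audio.
            // [_audioWriterInput appendSampleBuffer:sampleBuffer];
        } else {
            // Video buffer: keep feeding OpenTok as before, and append
            // a copy to your high-res recording.
            // [_videoWriterInput appendSampleBuffer:sampleBuffer];
        }
    }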
answered 2014-08-25T20:52:17.387