
Hi, I want to build an app that does video calls between iOS devices. I have looked at OpenTok and iDoubs, but I want to build it myself from scratch. I searched a lot but could not find any solution, so I tried to implement it the way I imagined video chat works. So far I have done the following (based on a streaming-over-Bonjour tutorial):

  1. Create an AVCaptureSession and receive CMSampleBufferRef data in its delegate callback:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (captureOutput == _captureOutput) {
            // Video frame: convert the pixel buffer into a UIImage.
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

            // Lock the image buffer and read its properties.
            CVPixelBufferLockBaseAddress(imageBuffer, 0);
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);

            // Create a CGImageRef from the CVImageBufferRef (assumes BGRA pixel data).
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
            CGImageRef newImage = CGBitmapContextCreateImage(newContext);
            CGContextRelease(newContext);
            CGColorSpaceRelease(colorSpace);

            previewImage = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
            CGImageRelease(newImage);

            // Show the local preview on the main thread.
            [uploadImageView performSelectorOnMainThread:@selector(setImage:) withObject:previewImage waitUntilDone:YES];

            // Unlock the image buffer.
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

            // Send before draining the pool: previewImage is autoreleased in this pool.
            [self sendMIxedData:@"video1"];

            [pool drain];
        }
        else if (captureOutput == _audioOutput) {
            // Audio sample buffer: copy the raw PCM bytes into dataA.
            [dataA release];            // avoid leaking the previous chunk under MRC
            dataA = [[NSMutableData alloc] init];

            CMBlockBufferRef blockBuffer;
            CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &currentInputAudioBufferList,
                                                                    sizeof(currentInputAudioBufferList), NULL, NULL, 0, &blockBuffer);

            for (int y = 0; y < currentInputAudioBufferList.mNumberBuffers; y++) {
                AudioBuffer audioBuffer = currentInputAudioBufferList.mBuffers[y];
                Float32 *frame = (Float32 *)audioBuffer.mData;
                [dataA appendBytes:frame length:audioBuffer.mDataByteSize];
            }
            CFRelease(blockBuffer);     // release the block buffer retained by the call above

            [self sendMIxedData:@"audio"];
        }
    }
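
The capture-session setup that produces these callbacks is not shown above. For context, here is a minimal sketch of that setup, assuming ivars named _session, _captureOutput and _audioOutput to match the delegate code, and BGRA output because the bitmap context above expects it:

    #import <AVFoundation/AVFoundation.h>

    // Minimal capture setup (assumed ivars: _session, _captureOutput, _audioOutput).
    - (void)setupCaptureSession {
        _session = [[AVCaptureSession alloc] init];
        _session.sessionPreset = AVCaptureSessionPresetMedium;

        // Camera and microphone inputs.
        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
        if ([_session canAddInput:videoInput]) [_session addInput:videoInput];

        AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
        if ([_session canAddInput:audioInput]) [_session addInput:audioInput];

        // Video frames as BGRA, matching the CGBitmapContext settings in the delegate.
        _captureOutput = [[AVCaptureVideoDataOutput alloc] init];
        _captureOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        dispatch_queue_t queue = dispatch_queue_create("capture.queue", NULL);
        [_captureOutput setSampleBufferDelegate:self queue:queue];
        if ([_session canAddOutput:_captureOutput]) [_session addOutput:_captureOutput];

        // Audio sample buffers go to the same delegate.
        _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        [_audioOutput setSampleBufferDelegate:self queue:queue];
        if ([_session canAddOutput:_audioOutput]) [_session addOutput:_audioOutput];

        [_session startRunning];
    }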
    
  2. The sendMIxedData: method then writes these video/audio bytes to the NSStream:

    NSData *data = UIImageJPEGRepresentation([self scaleAndRotateImage:previewImage], 1.0);

    // Tag the video payload with a marker string, then write the JPEG bytes.
    const uint8_t *videoTag = (const uint8_t *)[@"video1" UTF8String];
    [_outStream write:videoTag maxLength:strlen((char *)videoTag)];
    [_outStream write:(const uint8_t *)[data bytes] maxLength:[data length]];

    // Tag the audio payload, then write the raw PCM bytes.
    const uint8_t *audioTag = (const uint8_t *)[@"audio" UTF8String];
    [_outStream write:audioTag maxLength:strlen((char *)audioTag)];
    [_outStream write:(const uint8_t *)[dataA bytes] maxLength:[dataA length]];
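
One likely cause of the mixed-up payloads is that NSStream is a plain byte pipe: writing the strings "video1" or "audio" in front of the data gives the receiver no reliable packet boundaries, and reads may return partial or merged chunks. A common approach is to prefix every payload with a small fixed-size header carrying a type tag and the payload length. The sketch below only illustrates that idea; the packet layout and the writePacketWithType:payload: helper are not part of the original code:

    // Hypothetical framing: [1 byte type][4 bytes big-endian length][payload bytes].
    typedef NS_ENUM(uint8_t, PacketType) { PacketTypeVideo = 1, PacketTypeAudio = 2 };

    - (void)writePacketWithType:(uint8_t)type payload:(NSData *)payload {
        uint8_t header[5];
        header[0] = type;
        uint32_t length = CFSwapInt32HostToBig((uint32_t)[payload length]);
        memcpy(&header[1], &length, 4);

        // Note: in production the return value of write:maxLength: must be checked,
        // because a single call may write fewer bytes than requested.
        [_outStream write:header maxLength:sizeof(header)];
        [_outStream write:(const uint8_t *)[payload bytes] maxLength:[payload length]];
    }

The JPEG frame and the audio chunk would then be sent as [self writePacketWithType:PacketTypeVideo payload:data] and [self writePacketWithType:PacketTypeAudio payload:dataA], so the receiver can split the stream back into whole packets.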
    
  3. The bytes are then received in the NSStream delegate method on the receiving device.

    • The problem now is that I do not know whether this is the right way to do video chat at all.

    • I have also not managed to display the received bytes as video (a receiver-side sketch follows this list).

    • I tried to mark whether a chunk is video or audio by sending the strings "audio" and "video1" along with the bytes, and I also tried without the extra strings. The images are received and displayed correctly, but the audio is badly distorted.

    • Please tell me whether this is the right way to build a video chat app and, if so, what I should do to make it usable. For example: should I send the audio and video data together instead of separately as in my example? Here I am using the simple Bonjour tutorial, but how would I achieve the same thing with a real server?

    Please point me in the right direction, as I am stuck here.
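
On the receiving side, the bytes from step 3 have to be reassembled into whole packets before they can be decoded, since a single NSStreamEventHasBytesAvailable event may deliver only part of a packet. Below is a sketch of the delegate method, assuming the hypothetical type/length framing above; the _incoming buffer, remoteImageView and enqueueAudioData: are illustrative names, not from the question:

    // Accumulate incoming bytes, then peel off complete packets.
    - (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
        if (eventCode == NSStreamEventHasBytesAvailable) {
            uint8_t buf[4096];
            NSInteger len = [(NSInputStream *)aStream read:buf maxLength:sizeof(buf)];
            if (len > 0) [_incoming appendBytes:buf length:len];   // _incoming: NSMutableData ivar

            while ([_incoming length] >= 5) {
                const uint8_t *bytes = [_incoming bytes];
                uint8_t type = bytes[0];
                uint32_t payloadLength = 0;
                memcpy(&payloadLength, bytes + 1, 4);
                payloadLength = CFSwapInt32BigToHost(payloadLength);
                if ([_incoming length] < 5 + payloadLength) break;  // wait for the rest of the packet

                NSData *payload = [_incoming subdataWithRange:NSMakeRange(5, payloadLength)];
                [_incoming replaceBytesInRange:NSMakeRange(0, 5 + payloadLength) withBytes:NULL length:0];

                if (type == PacketTypeVideo) {
                    // JPEG frame: decode and show it.
                    UIImage *frame = [UIImage imageWithData:payload];
                    dispatch_async(dispatch_get_main_queue(), ^{ remoteImageView.image = frame; });
                } else if (type == PacketTypeAudio) {
                    // Raw PCM chunk: hand it to an audio queue / AudioUnit for playback.
                    [self enqueueAudioData:payload];
                }
            }
        }
    }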

Thanks. (Sorry about the formatting; I tried but could not get it right.)


1 Answer


Video streaming apps use video codecs such as VP8 or H.264, which compress far better than sending individual JPEG-encoded frames.

You should be able to display your received NSData by doing...

UIImage *image = [UIImage imageWithData:data];
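
For reference, hardware H.264 encoding is exposed through VideoToolbox (public API since iOS 8). A minimal compression-session sketch that could replace the JPEG step, fed with the CVPixelBufferRef from the capture callback; the _compressionSession ivar and the bitrate value are illustrative assumptions:

    #import <VideoToolbox/VideoToolbox.h>

    static void compressionCallback(void *refCon, void *frameRefCon, OSStatus status,
                                    VTEncodeInfoFlags flags, CMSampleBufferRef sampleBuffer) {
        // Each call delivers an encoded H.264 sample buffer; extract the NAL units
        // from its CMBlockBuffer and send them over the network.
    }

    - (void)setupEncoderWithWidth:(int32_t)width height:(int32_t)height {
        VTCompressionSessionCreate(NULL, width, height, kCMVideoCodecType_H264,
                                   NULL, NULL, NULL,
                                   compressionCallback, (__bridge void *)self, &_compressionSession);

        // Tune for low-latency streaming (values here are illustrative).
        VTSessionSetProperty(_compressionSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        VTSessionSetProperty(_compressionSession, kVTCompressionPropertyKey_AverageBitRate, (__bridge CFTypeRef)@(500000));
        VTCompressionSessionPrepareToEncodeFrames(_compressionSession);
    }

    - (void)encodePixelBuffer:(CVImageBufferRef)pixelBuffer presentationTime:(CMTime)pts {
        VTCompressionSessionEncodeFrame(_compressionSession, pixelBuffer, pts, kCMTimeInvalid,
                                        NULL, NULL, NULL);
    }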
answered 2013-09-20T19:52:58.447