After some digging I found a library that extracts NAL units from an .mp4 file while it is being written. I'm attempting to packetize this information into flv over RTMP using libavformat and libavcodec. I set up the video stream with the following method:
-(void)setupVideoStream {
    int ret = 0;
    videoCodec = avcodec_find_decoder(STREAM_VIDEO_CODEC);
    if (videoCodec == nil) {
        NSLog(@"Could not find encoder %i", STREAM_VIDEO_CODEC);
        return;
    }

    videoStream = avformat_new_stream(oc, videoCodec);
    videoCodecContext = videoStream->codec;
    videoCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
    videoCodecContext->codec_id = STREAM_VIDEO_CODEC;
    videoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    videoCodecContext->profile = FF_PROFILE_H264_BASELINE;

    videoCodecContext->bit_rate = 512000;
    videoCodecContext->bit_rate_tolerance = 0;

    videoCodecContext->width = STREAM_WIDTH;
    videoCodecContext->height = STREAM_HEIGHT;

    videoCodecContext->time_base.den = STREAM_TIME_BASE;
    videoCodecContext->time_base.num = 1;
    videoCodecContext->gop_size = STREAM_GOP;
    videoCodecContext->has_b_frames = 0;
    videoCodecContext->ticks_per_frame = 2;

    videoCodecContext->qcompress = 0.6;
    videoCodecContext->qmax = 51;
    videoCodecContext->qmin = 10;
    videoCodecContext->max_qdiff = 4;
    videoCodecContext->i_quant_factor = 0.71;

    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        videoCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

    videoCodecContext->extradata = avcCHeader;
    videoCodecContext->extradata_size = avcCHeaderSize;

    ret = avcodec_open2(videoStream->codec, videoCodec, NULL);
    if (ret < 0)
        NSLog(@"Could not open codec!");
}
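For reference, the STREAM_* values above are plain #defines; they are along these lines (the exact numbers here are placeholders, not necessarily the ones I'm using):

// Placeholder definitions for the STREAM_* constants referenced above;
// values are illustrative, only STREAM_VIDEO_CODEC (H.264) is fixed.
#define STREAM_VIDEO_CODEC AV_CODEC_ID_H264
#define STREAM_WIDTH       640
#define STREAM_HEIGHT      480
#define STREAM_TIME_BASE   30
#define STREAM_GOP         15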
Then I connect to my RTMPClient, and each time the library extracts a NALU it hands back an array containing one or two NALUs. The method that handles the actual streaming looks like this:
-(void)writeNALUToStream:(NSArray*)data time:(double)pts {
    int ret = 0;
    uint8_t *buffer = NULL;
    int bufferSize = 0;
    // Number of NALUs within the data array
    int numNALUs = [data count];
    // First NALU
    NSData *fNALU = [data objectAtIndex:0];
    int fLen = [fNALU length];

    // If there is more than one NALU...
    if (numNALUs > 1) {
        // Second NALU
        NSData *sNALU = [data objectAtIndex:1];
        int sLen = [sNALU length];
        // Allocate a buffer the size of first data and second data
        buffer = av_malloc(fLen + sLen);
        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);
        // Copy the second data bytes of sLen into the buffer + fLen + 1
        memcpy(buffer + fLen + 1, [sNALU bytes], sLen);
        // Update the size of the buffer
        bufferSize = fLen + sLen;
    } else {
        // Allocate a buffer the size of first data
        buffer = av_malloc(fLen);
        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);
        // Update the size of the buffer
        bufferSize = fLen;
    }

    // Initialize the packet
    av_init_packet(&pkt);
    //av_packet_from_data(&pkt, buffer, bufferSize);
    // Set the packet data to the buffer
    pkt.data = buffer;
    pkt.size = bufferSize;
    pkt.pts = pts;
    // Stream index 0 is the video stream
    pkt.stream_index = 0;
    // Add a key frame flag every 15 frames
    if ((processedFrames % 15) == 0)
        pkt.flags |= AV_PKT_FLAG_KEY;

    // Write the frame to the stream
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret < 0)
        NSLog(@"Error writing frame %i to stream", processedFrames);
    else {
        // Update the number of frames successfully streamed
        frameCount++;
        // Update the number of bytes successfully sent
        bytesSent += pkt.size;
    }

    // Update the number of frames processed
    processedFrames++;
    // Update the number of bytes processed
    processedBytes += pkt.size;

    free((uint8_t*)buffer);
    // Free the packet
    av_free_packet(&pkt);
}
After roughly 100 frames or so, I get an error:
malloc: *** error for object 0xe5bfa0: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
I can't seem to stop this from happening. I've tried commenting out the av_free_packet() call as well as the free(), and I've tried using av_packet_from_data() instead of initializing the packet and setting the data and size values myself.
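The av_packet_from_data() attempt looked roughly like this (the padded copy is my own addition based on the packet-buffer padding note in avcodec.h; everything else uses the same names as above):

// Rough sketch of the av_packet_from_data() variant I tried instead of
// av_init_packet() plus manual data/size assignment. av_packet_from_data()
// takes ownership of the buffer, so I did not call free() on it afterwards.
uint8_t *pktData = av_malloc(bufferSize + FF_INPUT_BUFFER_PADDING_SIZE);
memcpy(pktData, buffer, bufferSize);
memset(pktData + bufferSize, 0, FF_INPUT_BUFFER_PADDING_SIZE);
if (av_packet_from_data(&pkt, pktData, bufferSize) < 0) {
    NSLog(@"Could not wrap NALU buffer in packet");
    av_free(pktData);
    return;
}
pkt.pts = pts;
pkt.stream_index = 0;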
My question is: how can I stop this error from happening? According to Wireshark these are proper RTMP H.264 packets, yet they play back nothing but a black screen. Am I overlooking some obvious error?