
I recently ran into a libyuv crash.
I have tried a lot of things, but nothing helped.
Please help, or suggest some ideas on how to track this down. Thanks!

I have an iOS project (Objective-C). One of its features is encoding a video stream.
My idea is:
Step 1: start a timer (20 FPS)
Step 2: copy and fetch the bitmap data
Step 3: convert the bitmap data to YUV I420 (libyuv)
Step 4: encode to H.264 (OpenH264)
Step 5: send the H.264 data over RTSP
Everything runs in the foreground.

It works fine for 3~4 hours,
but it always crashes after about 4 hours.
CPU (39%) and memory (140 MB) are stable the whole time (no memory leak, no CPU spikes).
I have tried a lot without success (including adding try-catch in my project and checking the data size right before the conversion call).
I found that if I lower the frame rate (20 FPS -> 15 FPS), it runs longer before crashing.
Do I need to add something after encoding each frame?
Can anyone help me or offer some ideas? Thanks!

// This function runs in a GCD timer
- (void)processSDLFrame:(NSData *)_frameData {
    if (mH264EncoderPtr == NULL) {
        [self initEncoder];
        return;
    }

    int argbSize = mMapWidth * mMapHeight * 4;

    NSData *frameData = [[NSData alloc] initWithData:_frameData];
    if ([frameData length] == 0 || [frameData length] != argbSize) {
        NSLog(@"Incorrect frame with size : %lu\n", (unsigned long)[frameData length]);
        return;
    }

    SFrameBSInfo info;
    memset(&info, 0, sizeof (SFrameBSInfo));

    SSourcePicture pic;
    memset(&pic, 0, sizeof (SSourcePicture));
    pic.iPicWidth = mMapWidth;
    pic.iPicHeight = mMapHeight;
    pic.uiTimeStamp = [[NSDate date] timeIntervalSince1970];

    @try {
        libyuv::ConvertToI420(
            static_cast<const uint8 *>([frameData bytes]), // sample
            argbSize, // sample_size
            mDstY, // dst_y
            mStrideY, // dst_stride_y
            mDstU, // dst_u
            mStrideU, // dst_stride_u
            mDstV, // dst_v
            mStrideV, // dst_stride_v
            0, // crop_x
            0, // crop_y
            mMapWidth, // src_width
            mMapHeight, // src_height
            mMapWidth, // crop_width
            mMapHeight, // crop_height
            libyuv::kRotateNone, // rotation
            libyuv::FOURCC_ARGB); // fourcc

    } @catch (NSException *exception) {
        NSLog(@"libyuv::ConvertToI420 - exception:%@", exception.reason);
        return;
    }

    pic.iColorFormat = videoFormatI420;
    pic.iStride[0] = mStrideY;
    pic.iStride[1] = mStrideU;
    pic.iStride[2] = mStrideV;

    pic.pData[0] = mDstY;
    pic.pData[1] = mDstU;
    pic.pData[2] = mDstV;

    if (mH264EncoderPtr == NULL) {
        NSLog(@"OpenH264Manager - encoder not initialized");
        return;
    }

    int rv = -1;
    @try {
        rv = mH264EncoderPtr->EncodeFrame(&pic, &info);

    } @catch (NSException *exception) {
        NSLog( @"NSException caught - mH264EncoderPtr->EncodeFrame" );
        NSLog( @"Name: %@", exception.name);
        NSLog( @"Reason: %@", exception.reason );

        [self deinitEncoder];
        return;
    }

    if (rv != cmResultSuccess) {
        NSLog(@"OpenH264Manager - encode failed : %d", rv);
        [self deinitEncoder];
        return;
    }

    if (info.eFrameType == videoFrameTypeSkip) {
        NSLog(@"OpenH264Manager - drop skipped frame");
        return;
    }

    // handle buffer data
    int size = 0;
    int layerSize[MAX_LAYER_NUM_OF_FRAME] = { 0 };

    for (int layer = 0; layer < info.iLayerNum; layer++) {
        for (int i = 0; i < info.sLayerInfo[layer].iNalCount; i++) {
            layerSize[layer] += info.sLayerInfo[layer].pNalLengthInByte[i];
        }
        size += layerSize[layer];
    }

    uint8 *output = (uint8 *)malloc(size);
    size = 0;

    for (int layer = 0; layer < info.iLayerNum; layer++) {
        memcpy(output + size, info.sLayerInfo[layer].pBsBuf, layerSize[layer]);
        size += layerSize[layer];
    }

    // alloc new buffer for streaming
    NSData *newData = [NSData dataWithBytes:output length:size];

    // Send the data with RTSP
    sendData( newData );

    // free output buffer data
    free(output);
}




[Update, Jan. 8, 2020]

I reported this issue on the Google bug tracker:
https://bugs.chromium.org/p/libyuv/issues/detail?id=853

A Googler gave me the following feedback:

ARGBToI420 does no allocations. It's similar to a memcpy, with a source, a destination, and a number of pixels to convert.
The most common issues with it are:
1. The destination buffer has been deallocated. Try adding validation that the YUV buffer is valid: write to the first and last byte of each plane.
This often occurs on shutdown, when threads don't shut down in the order you were hoping. A mutex to guard the memory could help.
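That validation idea might look like the sketch below (helper names are hypothetical, and it assumes the Y/U/V planes are plain heap buffers): touch the first and last byte of each plane under a mutex before calling ConvertToI420, so an already-freed buffer faults here, with a clear stack, instead of deep inside libyuv.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

/* Serializes the per-frame encode path against deinit/teardown. */
static pthread_mutex_t gPlaneLock = PTHREAD_MUTEX_INITIALIZER;

/* Read and write both ends of a plane; a dangling pointer crashes here. */
static int touch_plane(uint8_t *plane, size_t bytes) {
    if (plane == NULL || bytes == 0) return 0;
    volatile uint8_t first = plane[0];
    plane[0] = first;
    volatile uint8_t last = plane[bytes - 1];
    plane[bytes - 1] = last;
    return 1;
}

/* Hypothetical pre-flight check to run just before ConvertToI420. */
int validate_i420(uint8_t *y, size_t ySize,
                  uint8_t *u, size_t uSize,
                  uint8_t *v, size_t vSize) {
    pthread_mutex_lock(&gPlaneLock);
    int ok = touch_plane(y, ySize) && touch_plane(u, uSize) && touch_plane(v, vSize);
    pthread_mutex_unlock(&gPlaneLock);
    return ok;
}
```

The same mutex would also need to be taken in deinitEncoder before freeing mDstY/mDstU/mDstV, so the timer callback can never race the teardown.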

2. The destination is an odd size and the allocator did not allocate enough memory. When allocating the UV planes, use (width + 1) / 2 for the width/stride and (height + 1) / 2 for the height, and allocate stride * height bytes. You could also use an allocator that verifies there are no over-reads or over-writes, or a sanitizer like ASan/MSan.
When screen casting, windows are usually a multiple of 2 pixels on Windows and Linux, but I have seen macOS use odd pixel counts.
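The sizing rule in point 2 can be captured in one small helper (the struct and function names here are my own, not libyuv API); the key detail is rounding odd dimensions up, not down, for the chroma planes:

```c
#include <stddef.h>

/* Plane layout for an I420 buffer, per the feedback above:
 * chroma dimensions are ceil(w/2) x ceil(h/2), i.e. (w+1)/2 and (h+1)/2. */
typedef struct {
    size_t yStride;
    size_t uvStride;
    size_t ySize;   /* bytes to allocate for the Y plane        */
    size_t uvSize;  /* bytes to allocate for EACH of U and V    */
} I420Layout;

I420Layout i420_layout(int width, int height) {
    I420Layout l;
    l.yStride  = (size_t)width;
    l.uvStride = ((size_t)width + 1) / 2;        /* ceil(width / 2)  */
    size_t uvHeight = ((size_t)height + 1) / 2;  /* ceil(height / 2) */
    l.ySize  = l.yStride * (size_t)height;
    l.uvSize = l.uvStride * uvHeight;
    return l;
}
```

For a 641x481 capture this yields a 321-byte UV stride and 321 * 241 bytes per chroma plane; truncating to 320x240 instead is exactly the under-allocation the feedback warns about.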

As a test you could wrap the function with temporary buffers: copy the ARGB to a temporary ARGB buffer, call ARGBToI420 into a temporary I420 buffer, then copy the I420 result to the final I420 buffer.
That should give you a clue which buffer/function is failing.
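That isolation test could be sketched like this (a debugging aid I'm inferring from the reply, with hypothetical names; `stub_convert` is a trivial stand-in so the sketch is self-contained, where the real code would call libyuv::ConvertToI420 on the temporary buffers):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Trivial stand-in for the real ConvertToI420 call on the temp buffers. */
static void stub_convert(const uint8_t *src, size_t srcSize,
                         uint8_t *dst, size_t dstSize) {
    size_t n = srcSize < dstSize ? srcSize : dstSize;
    memcpy(dst, src, n);
}

/* Stage every buffer through fresh heap copies, so the crash site tells
 * you which buffer is bad. Returns 0 on success, -1 on allocation failure. */
int convert_isolated(const uint8_t *srcArgb, size_t argbSize,
                     uint8_t *dstI420, size_t i420Size,
                     void (*convert)(const uint8_t *, size_t, uint8_t *, size_t)) {
    uint8_t *tmpArgb = malloc(argbSize);
    uint8_t *tmpI420 = malloc(i420Size);
    if (tmpArgb == NULL || tmpI420 == NULL) {
        free(tmpArgb);
        free(tmpI420);
        return -1;
    }

    memcpy(tmpArgb, srcArgb, argbSize);        /* crash here => source ARGB buffer is bad  */
    convert(tmpArgb, argbSize, tmpI420, i420Size); /* crash here => conversion itself      */
    memcpy(dstI420, tmpI420, i420Size);        /* crash here => destination I420 is bad    */

    free(tmpArgb);
    free(tmpI420);
    return 0;
}
```

Running this under ASan (or just watching which of the three steps faults) narrows the bug to one buffer, which is exactly the clue the feedback is after.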


I will give it a try.
