
I'm receiving video frames in the GstVideoFrame structure of a GStreamer element written in C++.

The frames are in YUV420 NV12 format.

In this GStreamer element, I'm trying to copy the Y plane and the UV plane into separate buffers.

According to videolan.org, YUV420 NV12 data is laid out in the incoming frame buffer as follows (information copied from the website):

NV12:

Related to I420, NV12 has one luma "luminance" plane Y and one plane with U and V values interleaved.

In NV12, chroma planes (blue and red) are subsampled in both the horizontal and vertical dimensions by a factor of 2.

For a 2×2 group of pixels, you have 4 Y samples and 1 U and 1 V sample.

It can be helpful to think of NV12 as I420 with the U and V planes interleaved.

Here is a graphical representation of NV12. Each letter represents one bit:

    For 1 NV12 pixel: YYYYYYYY UVUV
    For a 2-pixel NV12 frame: YYYYYYYYYYYYYYYY UVUVUVUV
    For a 50-pixel NV12 frame: Y×8×50 (UV)×2×50
    For a n-pixel NV12 frame: Y×8×n (UV)×2×n

But I can't seem to work out the offsets of the Y data and the UV data within the buffer.


Update_1: I have the frame's height and width, so I can compute the sizes of the Y data and UV data:
y_size = width * height;
uv_size = y_size / 2;
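
If the buffer were tightly packed with no row padding, those sizes would imply the Y plane starts at offset 0 and the interleaved UV plane starts at offset y_size. A minimal sketch of that assumption (plain pointers, not GStreamer API):

```cpp
#include <cstdint>
#include <cstring>

// Sketch only: assumes a single tightly packed NV12 buffer
// where stride == width (no row padding).
void split_packed_nv12(const uint8_t *nv12, int width, int height,
                       uint8_t *y_out, uint8_t *uv_out)
{
    const size_t y_size  = static_cast<size_t>(width) * height; // Y: 1 byte per pixel
    const size_t uv_size = y_size / 2;                          // interleaved UV: half the Y size

    std::memcpy(y_out,  nv12,          y_size);   // Y plane at offset 0
    std::memcpy(uv_out, nv12 + y_size, uv_size);  // UV plane right after the Y plane
}
```

I'm not sure this assumption holds for the actual GstVideoFrame buffer, though, since the rows may be padded.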

Any help or input on this would be greatly appreciated.

Thanks


1 Answer


Thanks to @Ext3h, this is how I was able to separate the Y data and UV data from the incoming YUV frame:

y_data  = (u8 *)GST_VIDEO_FRAME_PLANE_DATA (in_frame, 0);  // plane 0: Y
uv_data = (u8 *)GST_VIDEO_FRAME_PLANE_DATA (in_frame, 1);  // plane 1: interleaved UV
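
Note that these pointers refer to planes that may have padded strides (stride > width). If you then copy the planes into tightly packed buffers, a row-by-row copy using GST_VIDEO_FRAME_PLANE_STRIDE is safer; the following is only a sketch of that idea, not part of the original answer:

```cpp
#include <gst/video/video.h>
#include <cstring>

// Sketch: copy the Y and UV planes of a mapped NV12 GstVideoFrame into
// tightly packed destination buffers, honouring the per-plane stride.
// y_out must hold width*height bytes, uv_out width*height/2 bytes.
static void copy_nv12_planes(const GstVideoFrame *in_frame,
                             guint8 *y_out, guint8 *uv_out)
{
    const gint width  = GST_VIDEO_FRAME_WIDTH(in_frame);
    const gint height = GST_VIDEO_FRAME_HEIGHT(in_frame);

    const guint8 *y_src  = (const guint8 *)GST_VIDEO_FRAME_PLANE_DATA(in_frame, 0);
    const guint8 *uv_src = (const guint8 *)GST_VIDEO_FRAME_PLANE_DATA(in_frame, 1);
    const gint y_stride  = GST_VIDEO_FRAME_PLANE_STRIDE(in_frame, 0);
    const gint uv_stride = GST_VIDEO_FRAME_PLANE_STRIDE(in_frame, 1);

    // Y plane: 'height' rows of 'width' bytes each.
    for (gint row = 0; row < height; ++row)
        std::memcpy(y_out + row * width, y_src + row * y_stride, width);

    // UV plane: interleaved U and V, 'width' bytes per row, height/2 rows.
    for (gint row = 0; row < height / 2; ++row)
        std::memcpy(uv_out + row * width, uv_src + row * uv_stride, width);
}
```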
answered 2020-08-13T02:03:37.220