
I want to decode H.264 video sequences and display them on the screen. The video comes from the Pi camera, which I capture with the following code:

import io
import picamera

stream = io.BytesIO()
while True:
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.start_recording(stream, format='h264', quality=23)
        camera.wait_recording(15)
        camera.stop_recording()

Is there a way to decode the stream data and display it using OpenCV or another Python library?


4 Answers


I found a solution using ffmpeg-python.
I couldn't verify the solution on a Raspberry Pi, so I'm not sure whether it will work for you.

Assumptions:

  • stream holds the entire captured H.264 stream in a memory buffer.
  • You don't want to write the stream to a file.

The solution applies the following stages:

  • Execute FFmpeg as a subprocess, with stdin as the input pipe and stdout as the output pipe. The input is going to be the video stream (memory buffer). The output format is raw video frames in BGR pixel format.
  • Write the stream content to the pipe (to stdin).
  • Read the decoded video (frame by frame), and display each frame (using cv2.imshow).

Here is the code:

import ffmpeg
import numpy as np
import cv2
import io

width, height = 640, 480


# Seek to stream beginning
stream.seek(0)

# Execute FFmpeg in a subprocess with stdin as input pipe and stdout as output pipe
# The input is going to be the video stream (memory buffer)
# The output format is raw video frames in BGR pixel format.
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# https://github.com/kkroening/ffmpeg-python/issues/156
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
process = (
    ffmpeg
    .input('pipe:')
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)


# https://stackoverflow.com/questions/20321116/can-i-pipe-a-io-bytesio-stream-to-subprocess-popen-in-python
# https://gist.github.com/waylan/2353749
process.stdin.write(stream.getvalue())  # Write stream content to the pipe
process.stdin.close()  # close stdin (flush and send EOF)


#Read decoded video (frame by frame), and display each frame (using cv2.imshow)
while(True):
    # Read raw video frame from stdout as bytes array.
    in_bytes = process.stdout.read(width * height * 3)

    if not in_bytes:
        break

    # transform the byte read into a numpy array
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )

    #Display the frame
    cv2.imshow('in_frame', in_frame)

    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

process.wait()
cv2.destroyAllWindows()

Note: I used stdin and stdout as pipes (rather than named pipes) because I want the code to also work on Windows.
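As a side note, the per-frame read size in the loop above follows from the bgr24 layout: 3 bytes per pixel, so each frame is width * height * 3 bytes. A minimal, self-contained sketch of the byte-to-frame conversion (using synthetic bytes in place of the FFmpeg output):

```python
import numpy as np

width, height = 640, 480
frame_size = width * height * 3  # bgr24: 3 bytes (B, G, R) per pixel

# Synthetic stand-in for process.stdout.read(frame_size)
raw = bytes(frame_size)

# Same conversion as in the loop above
frame = np.frombuffer(raw, np.uint8).reshape([height, width, 3])
print(frame.shape)  # (480, 640, 3)
```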


To test the solution, I created a sample video file and read it into a memory buffer (encoded as H.264).
I used the memory buffer as the input for the code above (replacing your stream).

Here is the complete code, including the test code:

import ffmpeg
import numpy as np
import cv2
import io

in_filename = 'in.avi'

# Build synthetic video, for testing begins:
###############################################
# ffmpeg -y -r 10 -f lavfi -i testsrc=size=160x120:rate=1 -c:v libx264 -t 5 in.avi
width, height = 160, 120

(
    ffmpeg
    .input('testsrc=size={}x{}:rate=1'.format(width, height), r=10, f='lavfi')
    .output(in_filename, vcodec='libx264', crf=23, t=5)
    .overwrite_output()
    .run()
)
###############################################


# Use ffprobe to get video frames resolution
###############################################
p = ffmpeg.probe(in_filename, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
n_frames = int(p['streams'][0]['nb_frames'])
###############################################


# Stream the entire video as one large array of bytes
###############################################
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
in_bytes, _ = (
    ffmpeg
    .input(in_filename)
    .video # Video only (no audio).
    .output('pipe:', format='h264', crf=23)
    .run(capture_stdout=True) # Run synchronously, and capture stdout
)
###############################################


# Open In-memory binary streams
stream = io.BytesIO(in_bytes)

# Execute FFmpeg in a subprocess with stdin as input pipe and stdout as output pipe
# The input is going to be the video stream (memory buffer)
# The output format is raw video frames in BGR pixel format.
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# https://github.com/kkroening/ffmpeg-python/issues/156
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
process = (
    ffmpeg
    .input('pipe:')
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)


# https://stackoverflow.com/questions/20321116/can-i-pipe-a-io-bytesio-stream-to-subprocess-popen-in-python
# https://gist.github.com/waylan/2353749
process.stdin.write(stream.getvalue())  # Write stream content to the pipe
process.stdin.close()  # close stdin (flush and send EOF)


#Read decoded video (frame by frame), and display each frame (using cv2.imshow)
while(True):
    # Read raw video frame from stdout as bytes array.
    in_bytes = process.stdout.read(width * height * 3)

    if not in_bytes:
        break

    # transform the byte read into a numpy array
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )

    #Display the frame
    cv2.imshow('in_frame', in_frame)

    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

process.wait()
cv2.destroyAllWindows()

Answered 2020-01-31T23:27:55.837

I don't know exactly what you are trying to do, but another way, without FFmpeg, is this:

If you read the picamera documentation, you will see that the video port has splitters, which you can access with the splitter_port=x (1 <= x <= 3) keyword argument of camera.start_recording():
https://picamera.readthedocs.io/en/release-1.13/api_camera.html#picamera.PiCamera.start_recording

Basically, this means you can split the recorded stream into two sub-streams: one encoded as H.264 for saving (or whatever else), and another encoded in an OpenCV-compatible format. https://picamera.readthedocs.io/en/release-1.13/recipes2.html?highlight=splitter#capturing-to-an-opencv-object

This mostly happens on the GPU, so it is very fast (see the picamera documentation for more information).

If you need an example, it is the same as what they do here: https://picamera.readthedocs.io/en/release-1.13/recipes2.html?highlight=splitter#recording-at-multiple-resolutions but with an OpenCV object and an H.264 stream instead.
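An untested sketch of that idea (assumes a Raspberry Pi with picamera installed; the array shape follows the "capturing to an OpenCV object" recipe, and the port numbers are arbitrary choices):

```python
import numpy as np
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    # Sub-stream 1: H.264 recording on splitter port 1
    camera.start_recording('video.h264', format='h264', splitter_port=1)
    # Sub-stream 2: grab an OpenCV-compatible BGR frame from splitter port 2
    frame = np.empty((480, 640, 3), dtype=np.uint8)
    camera.capture(frame, format='bgr', use_video_port=True, splitter_port=2)
    camera.wait_recording(15, splitter_port=1)
    camera.stop_recording(splitter_port=1)
```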

Answered 2020-07-21T05:09:44.607

I don't think OpenCV knows how to decode H.264, so you would have to rely on other libraries to convert it to RGB or BGR.

On the other hand, you can use format='bgr' in picamera to make your life easier:
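For example (untested, following the picamera "capturing to an OpenCV object" recipe; requires a Pi with picamera installed):

```python
import numpy as np
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    # Capture directly into a numpy array in BGR order (OpenCV's native layout)
    frame = np.empty((480, 640, 3), dtype=np.uint8)
    camera.capture(frame, format='bgr', use_video_port=True)
```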

Answered 2020-01-31T11:29:19.643

@Rotem's answer is correct, but it does not work for large video chunks.

To handle larger videos, we need to replace process.stdin.write with process.communicate. Update the following lines:

...
# process.stdin.write(stream.getvalue())  # Write stream content to the pipe
outs, errs = process.communicate(input=stream.getvalue())
# process.stdin.close()  # close stdin (flush and send EOF)
# Read decoded video (frame by frame), and display each frame (using cv2.imshow)

position = 0
ct = time.time()
while(True):
    # Read raw video frame from stdout as bytes array.
    in_bytes = outs[position: position + width * height * 3]
    position += width * height * 3
...
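The slicing above can be checked without FFmpeg; a minimal sketch that extracts frames from a synthetic decoded buffer the same way (the sizes here are made up for the example):

```python
import numpy as np

width, height = 4, 3
frame_size = width * height * 3  # bgr24: 3 bytes per pixel

# Synthetic stand-in for the decoded buffer returned by process.communicate()
outs = bytes(2 * frame_size)  # exactly two raw bgr24 frames

frames = []
for position in range(0, len(outs), frame_size):
    chunk = outs[position: position + frame_size]
    if len(chunk) < frame_size:
        break  # ignore a trailing partial frame
    frames.append(np.frombuffer(chunk, np.uint8).reshape([height, width, 3]))

print(len(frames))  # 2
```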
Answered 2021-04-03T04:49:13.103