I found a solution using ffmpeg-python.
I am not able to verify the solution on a Raspberry Pi, so I am not sure whether it is going to work for you.
Assumptions:
- stream holds the entire captured h264 stream in a memory buffer (see the sketch after this list for one way such a buffer might be captured).
- You don't want to write the stream into a file.
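For reference only, since I could not test it on a Raspberry Pi: a minimal sketch of how such an in-memory buffer might be captured with picamera. The resolution, framerate and 5-second duration are placeholders of my own, not part of the question.

import io
import picamera

# Hypothetical capture step (not tested): record a few seconds of H.264 video
# straight into an in-memory buffer instead of a file.
stream = io.BytesIO()
with picamera.PiCamera(resolution=(640, 480), framerate=30) as camera:
    camera.start_recording(stream, format='h264')
    camera.wait_recording(5)
    camera.stop_recording()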
The solution applies the following stages:
- Execute FFmpeg in a sub-process, with stdin as the input pipe and stdout as the output pipe. The input is going to be the video stream (memory buffer). The output format is raw video frames in BGR pixel format.
- Write the stream content to the pipe (to stdin).
- Read the decoded video (frame by frame), and display each frame (using cv2.imshow).
Here is the code:
import ffmpeg
import numpy as np
import cv2
import io

width, height = 640, 480

# Seek to the beginning of the stream
stream.seek(0)

# Execute FFmpeg in a subprocess with stdin as input pipe and stdout as output pipe.
# The input is going to be the video stream (memory buffer).
# The output format is raw video frames in BGR pixel format.
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# https://github.com/kkroening/ffmpeg-python/issues/156
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
process = (
    ffmpeg
    .input('pipe:')
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)

# https://stackoverflow.com/questions/20321116/can-i-pipe-a-io-bytesio-stream-to-subprocess-popen-in-python
# https://gist.github.com/waylan/2353749
process.stdin.write(stream.getvalue())  # Write stream content to the pipe
process.stdin.close()                   # Close stdin (flush and send EOF)

# Read the decoded video (frame by frame), and display each frame (using cv2.imshow)
while True:
    # Read one raw video frame from stdout as bytes.
    in_bytes = process.stdout.read(width * height * 3)

    if not in_bytes:
        break

    # Transform the bytes into a numpy array (height x width x 3 BGR image)
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )

    # Display the frame
    cv2.imshow('in_frame', in_frame)

    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

process.wait()
cv2.destroyAllWindows()
Note: I used stdin and stdout as pipes (rather than named pipes), because I want the code to work on Windows as well.
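One caveat worth mentioning (my own addition, not verified on your setup): writing the entire buffer to stdin before reading stdout may block if the encoded stream is larger than the OS pipe buffers. A possible workaround is to feed stdin from a separate thread while the main thread reads stdout, roughly like this:

import threading

def write_to_stdin(proc, data):
    # Feed the H.264 buffer to FFmpeg's stdin from a separate thread,
    # so the main thread is free to read decoded frames from stdout.
    proc.stdin.write(data)
    proc.stdin.close()

threading.Thread(target=write_to_stdin, args=(process, stream.getvalue()), daemon=True).start()
# ...then run the same stdout reading loop as above in the main thread.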
For testing the solution, I created a sample video file and read it into a memory buffer (encoded as H.264).
I used the memory buffer as the input to the code above (replacing your stream).
Here is the complete code, including the test code:
import ffmpeg
import numpy as np
import cv2
import io

in_filename = 'in.avi'

# Build a synthetic video for testing:
###############################################
# ffmpeg -y -r 10 -f lavfi -i testsrc=size=160x120:rate=1 -c:v libx264 -t 5 in.avi
width, height = 160, 120

(
    ffmpeg
    .input('testsrc=size={}x{}:rate=1'.format(width, height), r=10, f='lavfi')
    .output(in_filename, vcodec='libx264', crf=23, t=5)
    .overwrite_output()
    .run()
)
###############################################

# Use ffprobe to get the video frame resolution
###############################################
p = ffmpeg.probe(in_filename, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
n_frames = int(p['streams'][0]['nb_frames'])
###############################################

# Stream the entire video as one large array of bytes
###############################################
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
in_bytes, _ = (
    ffmpeg
    .input(in_filename)
    .video                                   # Video only (no audio).
    .output('pipe:', format='h264', crf=23)
    .run(capture_stdout=True)                # Run synchronously, capturing the H.264 stream from stdout.
)
###############################################

# Open an in-memory binary stream
stream = io.BytesIO(in_bytes)

# Execute FFmpeg in a subprocess with stdin as input pipe and stdout as output pipe.
# The input is going to be the video stream (memory buffer).
# The output format is raw video frames in BGR pixel format.
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# https://github.com/kkroening/ffmpeg-python/issues/156
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
process = (
    ffmpeg
    .input('pipe:')
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)

# https://stackoverflow.com/questions/20321116/can-i-pipe-a-io-bytesio-stream-to-subprocess-popen-in-python
# https://gist.github.com/waylan/2353749
process.stdin.write(stream.getvalue())  # Write stream content to the pipe
process.stdin.close()                   # Close stdin (flush and send EOF)

# Read the decoded video (frame by frame), and display each frame (using cv2.imshow)
while True:
    # Read one raw video frame from stdout as bytes.
    in_bytes = process.stdout.read(width * height * 3)

    if not in_bytes:
        break

    # Transform the bytes into a numpy array (height x width x 3 BGR image)
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )

    # Display the frame
    cv2.imshow('in_frame', in_frame)

    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

process.wait()
cv2.destroyAllWindows()