I couldn't find a solution using PyAV, so I am using ffmpeg-python instead.
ffmpeg-python is a Pythonic binding for FFmpeg, just like PyAV.
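A note on setup (an assumption about your environment, not part of the original answer): the PyPI package is named ffmpeg-python, so install it with pip install ffmpeg-python, and make sure a native FFmpeg executable is available on your system path, because the binding only builds and runs FFmpeg command lines.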
The code reads the entire video at once into a 3D NumPy array of grayscale frames.
The solution performs the following steps:
- Build a synthetic input video file (for testing).
- Use "probe" to get the resolution of the video file.
- Stream the video into an array of bytes.
- Reshape the byte array into an n x height x width NumPy array.
- Display the first frame (for testing).
Here is the code (please read the comments):
import ffmpeg
import numpy as np
from PIL import Image
in_filename = 'in.avi'
"""Build synthetic video, for testing begins:"""
# Equivalent command line: ffmpeg -y -r 10 -f lavfi -i testsrc=size=160x120:rate=1 -c:v libx264 -t 5 in.avi
width, height = 160, 120
(
    ffmpeg
    .input('testsrc=size={}x{}:rate=1'.format(width, height), r=10, f='lavfi')
    .output(in_filename, vcodec='libx264', t=5)
    .overwrite_output()
    .run()
)
"""Build synthetic video ends"""
# Use ffprobe to get video frames resolution
p = ffmpeg.probe(in_filename, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# Stream the entire video as one large array of bytes
in_bytes, _ = (
    ffmpeg
    .input(in_filename)
    .video  # Video only (no audio).
    .output('pipe:', format='rawvideo', pix_fmt='gray')  # Set the output format to raw video in 8-bit grayscale.
    .run(capture_stdout=True)
)
n_frames = len(in_bytes) // (height*width) # Compute the number of frames.
frames = np.frombuffer(in_bytes, np.uint8).reshape(n_frames, height, width) # Reshape buffer to array of n_frames frames (shape of each frame is (height, width)).
im = Image.fromarray(frames[0, :, :]) # Convert first frame to image object
im.show() # Display the image
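As a quick sanity check (a minimal sketch using the variables defined above, not part of the original answer), you can confirm that the array shape matches the probed resolution and that each grayscale pixel occupies one byte:

print(frames.shape)   # Expected: (n_frames, height, width), e.g. (5, 120, 160) for the synthetic clip.
print(frames.dtype)   # uint8 - one byte per grayscale pixel, matching pix_fmt='gray'.
assert frames.shape[1:] == (height, width)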
Output: