21

I recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be strictly necessary, here is the command I am using to broadcast the video:

raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264

This streams the video perfectly.

What I would like to do now is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.

I am completely lost on where to start with this task. Can anyone point me to a good tutorial? If this is not achievable with Python, what tools/languages could I use to accomplish it?


6 Answers

24

The same method that depu listed worked perfectly for me. I just replaced the 'video file' with the 'RTSP URL' of an actual camera. The example below worked with an AXIS IP camera. (This did not work for a while in earlier versions of OpenCV.) It works with OpenCV 3.4.1 on Windows 10.

import cv2

# Open the RTSP stream (an AXIS IP camera in this example)
cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        # Stop if no frame was received (stream ended or connection dropped)
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Answered 2018-09-20T15:04:32.110
18

A somewhat hacky solution, but you can use the VLC Python bindings (you can install them with pip install python-vlc) and play the stream:

import time
import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()

Then take a snapshot every second or so:

while True:
    time.sleep(1)
    # Write the current video frame out to a temporary PNG file
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)

You can then process it with SimpleCV or something else (just load the image file '.snapshot.tmp.png' into your processing library).
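To get at the motion detection mentioned in the question, one simple option is to difference consecutive snapshots. Below is a minimal sketch that swaps OpenCV in for SimpleCV and reuses the player from the snippet above; the threshold of 25 and the 500-pixel count are arbitrary values you would tune for your scene:

import time
import cv2

prev = None
while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
    img = cv2.imread('.snapshot.tmp.png', cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue  # snapshot file not written yet
    if prev is not None and prev.shape == img.shape:
        diff = cv2.absdiff(prev, img)                    # pixel-wise difference between snapshots
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 500:                 # arbitrary sensitivity threshold
            print("motion detected")
    prev = img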

Answered 2014-01-10T19:22:00.127
8

Use OpenCV:

import cv2
video = cv2.VideoCapture("rtsp url")

Then you can capture the frames. For details, read the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html

Answered 2018-04-16T14:36:13.290
2

Depending on the stream type, you could probably have a look at this project for some ideas.

https://code.google.com/p/python-mjpeg-over-rtsp-client/

If you want to be super pro, you could use something like http://opencv.org/ (I believe the Python modules are available) to handle the motion detection.
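As a rough illustration of that idea, here is a minimal sketch of motion detection on the RTSP stream using a modern OpenCV's built-in background subtractor (MOG2); the stream URL is taken from the question's cvlc command, and the 500-pixel threshold is an arbitrary value you would tune:

import cv2

cap = cv2.VideoCapture("rtsp://:8554/output.h264")  # URL from the question's cvlc command
fgbg = cv2.createBackgroundSubtractorMOG2()         # models the static background over time

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    mask = fgbg.apply(frame)                        # non-zero pixels mark moving regions
    if cv2.countNonZero(mask) > 500:                # arbitrary sensitivity threshold
        print("motion detected")

cap.release()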

Answered 2013-07-31T12:10:24.760
2

Here is one more option.

It is much more complicated than the other answers. :-O

But this way, with just a single connection to the camera, you can "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it as multicast, write it to disk, etc.

..of course, just in case you need something like that (otherwise you may prefer the earlier answers)

Let's create two independent Python programs:

(1) The server program (RTSP connection, decoding): server.py

(2) The client program (reads frames from shared memory): client.py

The server must be started before the client, i.e.

python3 server.py

Then in another terminal:

python3 client.py

Here is the code:

(1) server.py

import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done each 1000 milliseconds and passed on to the shmem ringbuffer
image_interval=1000  
# define rgb image dimensions
width  =1920//4
height =1080//4
# posix shared memory: identification tag and size of the ring buffer
shmem_name    ="cam_example" 
shmem_buffers =10 

shmem_filter    =RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter      =SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter =TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread        =AVThread("avthread",interval_filter)
av_in_filter    =avthread.getFrameFilter()
livethread      =LiveThread("livethread")

ctx =LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()

avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# all those threads are written in cpp and they are running in the
# background.  Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()

print("bye") 

(2) client.py

import cv2
from valkka.api2 import ShmemRGBClient

width  =1920//4
height =1080//4

# This identifies posix shared memory - must be same as in the server side
shmem_name    ="cam_example"
# Size of the shmem ringbuffer - must be same as in the server side
shmem_buffers =10              

client = ShmemRGBClient(
    name          = shmem_name,
    n_ringbuffer  = shmem_buffers,
    width         = width,
    height        = height,
    mstimeout     = 1000,       # the client times out if nothing has been received in 1000 milliseconds
    verbose       = False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)

If you are interested, you can find more at https://elsampsa.github.io/valkka-examples/

Answered 2018-11-15T14:07:35.743
-1

Hi, reading frames from a video can be achieved using Python and OpenCV. Below is some sample code (it works with Python and OpenCV 2):

import cv2
import os

# The code below captures the video frames and saves them to a folder
# (in the current working directory)

dirname = 'myfolder'
os.makedirs(dirname, exist_ok=True)  # make sure the output folder exists

# video path
cap = cv2.VideoCapture("TestVideo.mp4")
count = 0
while(cap.isOpened()):
    ret, frame = cap.read()
    if not ret:
        break
    else:
        cv2.imshow('frame', frame)
        # The received "frame" is saved here; you can also manipulate "frame" as per your needs.
        name = "rec_frame" + str(count) + ".jpg"
        cv2.imwrite(os.path.join(dirname, name), frame)
        count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Answered 2016-07-04T11:06:14.193