I have three nodes in my network: dataServer --- node1 --- node2. My video file "friends.mp4" is stored on dataServer. I run both dataServer and node2 as rtmp-nginx servers. On node1 I use ffmpeg to pull the stream from dataServer and push the converted stream to the "live" application on node2 (a sketch of that command follows the config below). Here is the nginx.conf I configured for node2:
worker_processes 1;
events {
    worker_connections 1024;
}
rtmp {
    server {
        listen 1935;
        chunk_size 4000;
        application play {
            play /usr/local/nginx/html/play;
        }
        application hls {
            live on;
            hls on;
            hls_path /usr/local/nginx/html/hls;
            hls_fragment 1s;
            hls_playlist_length 4s;
        }
        application live {
            live on;
            allow play all;
        }
    }
}
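For context, the relay on node1 is run roughly along these lines; this is only a sketch, and the source URL, codec options, and the stream key "mystream" are placeholders rather than my exact values:

# minimal sketch: pull the stream from dataServer and re-publish it to the "live" app on node2
# (source URL, codec options and the stream key "mystream" are assumptions, not exact values)
ffmpeg -re -i rtmp://dataServer:1935/play/friends \
       -c:v libx264 -c:a aac -f flv rtmp://node2:1935/live/mystream

Note that nginx-rtmp addresses a stream by application name plus stream key, so the publish URL ends in /live/mystream rather than just /live.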
I want to run this Python code to detect faces in friends.mp4:

import cv2
vid_capture = cv2.VideoCapture("rtmp://127.0.0.1:1935/live")
face_detect = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')

if not vid_capture.isOpened():
    print("Error opening the video file")
else:
    fps = vid_capture.get(cv2.CAP_PROP_FPS)
    print("Frames per second : ", fps, 'FPS')
    frame_count = vid_capture.get(cv2.CAP_PROP_FRAME_COUNT)
    print('Frame count : ', frame_count)

while vid_capture.isOpened():
    ret, frame = vid_capture.read()
    if ret:
        # detect faces on the grayscale frame and draw a box and a circle around each one
        gray = cv2.cvtColor(frame, code=cv2.COLOR_BGR2GRAY)
        face_zone = face_detect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        for x, y, w, h in face_zone:
            cv2.rectangle(frame, pt1=(x, y), pt2=(x + w, y + h), color=[0, 0, 255], thickness=2)
            cv2.circle(frame, center=(x + w // 2, y + h // 2), radius=w // 2, color=[0, 255, 0], thickness=2)
        cv2.imshow('Frame', frame)
        key = cv2.waitKey(50)
        if key == ord('q'):
            break
    else:
        break

vid_capture.release()
cv2.destroyAllWindows()
But I can't do this, because cv2.VideoCapture fails to get the stream from "rtmp://127.0.0.1:1935/live"; maybe that is because this path is not a file. How can I take the video stream that the nginx server receives and feed it into my OpenCV model? Is there a way to access just the data stream the nginx server receives and turn it into a Python object that OpenCV can use?