How can I stream real-time, AI-processed video to another platform over TCP? I am using a PC as the server to receive the video and a Raspberry Pi as the client to send it.
I implemented real-time video transmission over TCP with OpenCV, and I already have complete, working object-detection code (from https://github.com/PINTO0309/MobileNet-SSD-RealSense, thanks to the engineer PINTO0309). The video is received on the PC (Windows), while object detection and video transmission run on the Raspberry Pi.
Server
import socket, time, cv2, numpy

def ReceiveVideo():
    # The PC listens here; the client must connect to this address and port.
    address = ('193.169.4.155', 50000)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(address)
    s.listen(1)

    def recvall(sock, count):
        # Keep reading until exactly `count` bytes have arrived.
        buf = b''
        while count:
            newbuf = sock.recv(count)
            if not newbuf:
                return None
            buf += newbuf
            count -= len(newbuf)
        return buf

    conn, addr = s.accept()
    print('connect from: ' + str(addr))
    while 1:
        start = time.time()
        # Each frame arrives as a 16-byte length header followed by the JPEG data.
        length = recvall(conn, 16)
        stringData = recvall(conn, int(length))
        data = numpy.frombuffer(stringData, numpy.uint8)
        decimg = cv2.imdecode(data, cv2.IMREAD_COLOR)
        cv2.imshow('SERVER', decimg)
        end = time.time()
        seconds = end - start
        fps = 1 / seconds
        # Send the measured FPS back to the client as a short reply.
        conn.send(bytes(str(int(fps)), encoding='utf-8'))
        k = cv2.waitKey(10) & 0xff
        if k == 27:  # press Esc to quit
            break
    s.close()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ReceiveVideo()
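To recap the wire format the two scripts agree on: each frame is sent as a 16-byte, space-padded ASCII length header followed by the JPEG bytes of that frame, and after every frame the server sends back the FPS it measured as a short UTF-8 string.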
Client
import socket, cv2, numpy, time, sys

def SendVideo():
    # Address of the PC that runs the server script.
    address = ('193.169.4.155', 50000)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(address)
    global capture
    capture = cv2.VideoCapture(0)
    ret, frame = capture.read()
    # JPEG quality 15 keeps the encoded frames small.
    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 15]
    while ret:
        time.sleep(0.01)
        result, imgencode = cv2.imencode('.jpg', frame, encode_param)
        data = numpy.array(imgencode)
        stringData = data.tobytes()
        # Send the length of the JPEG data padded to 16 bytes, then the data itself.
        sock.send(str.encode(str(len(stringData)).ljust(16)))
        sock.send(stringData)
        # The server replies with the FPS it measured.
        receive = sock.recv(1024)
        if len(receive):
            print(str(receive, encoding='utf-8'))
        ret, frame = capture.read()
        if cv2.waitKey(10) == 27:
            break
    sock.close()

if __name__ == '__main__':
    SendVideo()
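Both scripts hard-code the same address, ('193.169.4.155', 50000). That has to be the LAN IP of the PC running the server, reachable from the Raspberry Pi, and the server script has to be started before the client.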
After modifying the code several times, I ran into the following problems.
- After transmission, the displayed video stays stuck on the first frame.
- Transmission fails: the Raspberry Pi reports [Errno 32] Broken pipe, and the PC side shows the following error:
OpenCV(3.4.2) C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp:356: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
I think the second error means that no image was received.
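If that guess is right, I suppose a guard like the one below in the receive loop of ReceiveVideo() would at least show where things go wrong, but I am not sure it is the proper fix:

        length = recvall(conn, 16)
        if length is None:                      # the client closed the connection
            break
        stringData = recvall(conn, int(length))
        if stringData is None:
            break
        data = numpy.frombuffer(stringData, numpy.uint8)
        decimg = cv2.imdecode(data, cv2.IMREAD_COLOR)
        if decimg is None:                      # the JPEG data was corrupted or truncated
            conn.send(b'0')                     # still reply so the client does not block on recv
            continue
        cv2.imshow('SERVER', decimg)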
What I ultimately want is to stream the video with the detection results drawn on it to the PC in real time.
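This is roughly the client loop I am aiming for; annotate() is only a placeholder that draws a text label, standing in for the MobileNet-SSD inference and drawing code from the repository linked above:

import socket, cv2

def annotate(frame):
    # Placeholder for the detection step: the real script would draw the
    # detected boxes and class labels on the frame here before sending it.
    cv2.putText(frame, 'detections go here', (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return frame

def SendDetectedVideo():
    address = ('193.169.4.155', 50000)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(address)
    capture = cv2.VideoCapture(0)
    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 15]
    ret, frame = capture.read()
    while ret:
        frame = annotate(frame)                                 # run detection and draw the results
        result, imgencode = cv2.imencode('.jpg', frame, encode_param)
        stringData = imgencode.tobytes()
        sock.send(str.encode(str(len(stringData)).ljust(16)))   # 16-byte length header
        sock.send(stringData)                                   # JPEG payload
        sock.recv(1024)                                         # FPS reply from the server
        ret, frame = capture.read()
    sock.close()

if __name__ == '__main__':
    SendDetectedVideo()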