
I am trying to run object detection on a Gazebo simulation environment. For this I am following https://dev.px4.io/v1.9.0/en/simulation/gazebo.html. I can already receive the video in QGroundControl, and I can also receive video from a USB webcam in Python and detect objects on it. Now I want to read the GStreamer UDP video with Python's OpenCV `VideoCapture` function, but it gives this error:

  File "gstreamer_try.py", line 120, in <module>
    feed_dict={image_tensor: image_np_expanded})
  File "/home/hanco/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/hanco/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1149, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "/home/hanco/anaconda3/lib/python3.7/site-packages/numpy/core/_asarray.py", line 85, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
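The `TypeError` arises because `cv2.VideoCapture` failed to open the UDP stream, so `cap.read()` returns `(False, None)`, and the `None` frame is then fed into `sess.run`, where NumPy cannot convert it to an image array. A minimal sketch reproducing the failure without any GStreamer setup:

```python
import numpy as np

# When VideoCapture cannot open the stream, cap.read() yields (False, None).
# Passing that None frame to np.asarray (which sess.run does internally for
# feed_dict values) raises the same TypeError as in the traceback above.
frame = None  # stands in for the frame returned by a failed cap.read()
try:
    np.asarray(frame, dtype=np.uint8)
    error_message = ""
except TypeError as exc:
    error_message = str(exc)

print(error_message)  # mentions 'NoneType'
```

Checking `ret` (or `cap.isOpened()`) before feeding the frame into the model turns this crash into a clear "capture failed" diagnosis.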

First, I tried writing a sender and a receiver to stream some video over UDP:

gstreamer_sender.py:

import socket
import numpy as np
import cv2 as cv


addr = ("127.0.0.1", 5655)
buf = 512
width = 640
height = 480
cap = cv.VideoCapture("/home/hanco/Desktop/duckduck.mp4")
cap.set(3, width)   # CAP_PROP_FRAME_WIDTH
cap.set(4, height)  # CAP_PROP_FRAME_HEIGHT
code = 'start'
code = ('start' + (buf - len(code)) * 'a').encode('utf-8')


if __name__ == '__main__':
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while(cap.isOpened()):
        ret, frame = cap.read()
        if ret:
            s.sendto(code, addr)
            data = frame.tobytes()  # tostring() is deprecated
            for i in range(0, len(data), buf):
                s.sendto(data[i:i+buf], addr)
            # cv.imshow('send', frame)
            # if cv.waitKey(1) & 0xFF == ord('q'):
                # break
        else:
            break
    s.close()
    cap.release()
    cv.destroyAllWindows()

gstreamer_receiver.py:

import socket
import numpy as np
import cv2 as cv


addr = ("127.0.0.1", 5600)
buf = 512
width = 640
height = 480
code = b'start'
num_of_chunks = width * height * 3 // buf  # integer division

if __name__ == '__main__':
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(addr)
    chunk, _ = s.recvfrom(buf)
    start = chunk.startswith(code)
    while True:
        chunks = []
        while len(chunks) < num_of_chunks:
            chunk, _ = s.recvfrom(buf)
            if start:
                chunks.append(chunk)

        byte_frame = b''.join(chunks)

        frame = np.frombuffer(byte_frame, dtype=np.uint8).reshape(height, width, 3)  # rows (height) first

        cv.imshow('recv', frame)
        if cv.waitKey(1) & 0xFF == ord('q'):
            break

    s.close()
    cv.destroyAllWindows()
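The receiver's chunk count follows directly from the raw frame size: a 640×480 BGR frame is 640 · 480 · 3 = 921 600 bytes, which splits into exactly 1800 chunks of 512 bytes. A quick round-trip check of the split/join logic used by the two scripts:

```python
width, height, buf = 640, 480, 512
payload = bytes(width * height * 3)        # one dummy raw BGR frame
num_of_chunks = width * height * 3 // buf  # integer division
# Split exactly as the sender does, then rejoin as the receiver does.
chunks = [payload[i:i + buf] for i in range(0, len(payload), buf)]
print(len(chunks), num_of_chunks)  # both 1800
```

Because 921 600 divides evenly by 512, no partial trailing chunk needs handling here; other resolutions may leave a remainder.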

This worked well, and I figured I could receive the video from Gazebo just by changing the port; Gazebo's default port is 5600. But it did not work.

I just want to achieve the same thing here, using the code above:

cap = cv2.VideoCapture(GSTREAMEAR_VIDEO_INPUT)

It gives the error at this line:

          feed_dict={image_tensor: image_np_expanded})
      # Visualization of the results of a detection.
      vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          np.squeeze(boxes),
          np.squeeze(classes).astype(np.int32),
          np.squeeze(scores),
          category_index,
          use_normalized_coordinates=True,
          line_thickness=8)

1 Answer


The following code runs without any error:

# Read video
video = cv2.VideoCapture("udpsrc port=5600 ! application/x-rtp,payload=96,encoding-name=H264 ! rtpjitterbuffer mode=1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! appsink", cv2.CAP_GSTREAMER)
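As a rough breakdown (my reading of the pipeline, not part of the answer itself): `udpsrc` listens on Gazebo's UDP port, the caps filter declares the stream as RTP-wrapped H.264, `rtpjitterbuffer` smooths packet arrival, the depay/parse/decode elements recover raw video, and `appsink` hands frames to the application. Assembling the string from its elements makes the port and caps easy to tweak per setup:

```python
# Assemble the GStreamer pipeline used in the answer from its elements;
# the port and payload type are assumed defaults and may differ per setup.
elements = [
    "udpsrc port=5600",                                 # listen for Gazebo's UDP stream
    "application/x-rtp,payload=96,encoding-name=H264",  # caps: RTP-wrapped H.264
    "rtpjitterbuffer mode=1",                           # reorder/smooth RTP packets
    "rtph264depay",                                     # strip RTP, keep H.264
    "h264parse",                                        # frame the H.264 bitstream
    "decodebin",                                        # choose a decoder automatically
    "videoconvert",                                     # convert to a format OpenCV accepts
    "appsink",                                          # deliver frames to the application
]
pipeline = " ! ".join(elements)
print(pipeline)
```

The resulting string is what gets passed as `cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)`; note this only works if the OpenCV build was compiled with GStreamer support.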
answered 2019-12-13T07:44:19.420