
I am new to OpenCV with CUDA, so I have been testing the simplest case: loading the model on the GPU instead of the CPU to see how much faster the GPU is, and I was shocked by the results I got.

----------------------------------------------------------------
---         GPU                vs             CPU            ---
---                                                          ---
--- 21.913758993148804 seconds --- 3.0586464405059814 seconds ---
--- 22.379303455352783 seconds --- 3.1384341716766357 seconds ---
--- 21.500431060791016 seconds --- 2.9400241374969482 seconds ---
--- 21.292986392974854 seconds --- 3.3738017082214355 seconds ---
--- 20.88358211517334 seconds  --- 3.388749599456787 seconds  ---

I'll include my code snippet in case I'm doing something wrong that is causing the GPU times to spike.

def loadYolo():
    net = cv2.dnn.readNet("yolov4.weights", "yolov4.cfg")
    
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)

    classes = []
    with open("coco.names", "r") as f:
        classes = [line.strip() for line in f.readlines()]

    layer_names = net.getLayerNames()
    output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
    return net,classes,layer_names,output_layers


@socketio.on('image')
def image(data_image):

    sbuf = StringIO()
    sbuf.write(data_image)
    
    b = io.BytesIO(base64.b64decode(data_image))
    if(str(data_image) == 'data:,'):
        pass
    else:
        pimg = Image.open(b)
    
        frame = cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)
        frame = resize(frame, width=700)
        frame = cv2.flip(frame, 1)
    
        net,classes,layer_names,output_layers=loadYolo()
        height, width, channels = frame.shape

        
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
        swapRB=True, crop=False)

       
        net.setInput(blob)
        outs = net.forward(output_layers)
        print("--- %s seconds ---" % (time.time() - start_time))
        
        
        class_ids = []
        confidences = []
        boxes = []
        for out in outs:
            for detection in out:
                scores = detection[5:]
                class_id = np.argmax(scores)
                confidence = scores[class_id]
                if confidence > 0.5:
                    # Object detected
                    center_x = int(detection[0] * width)
                    center_y = int(detection[1] * height)
                    w = int(detection[2] * width)
                    h = int(detection[3] * height)

                    # Rectangle coordinates
                    x = int(center_x - w / 2)
                    y = int(center_y - h / 2)

                    boxes.append([x, y, w, h])
                    confidences.append(float(confidence))
                    class_ids.append(class_id)

        indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
        font = cv2.FONT_HERSHEY_PLAIN
        colors = np.random.uniform(0, 255, size=(len(classes), 3))
        for i in range(len(boxes)):
            if i in indexes:
                x, y, w, h = boxes[i]
                label = str(classes[class_ids[i]])
                color = colors[class_ids[i]]
                cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
                cv2.putText(frame, label, (x, y + 30), font, 1, color, 2)
    
        imgencode = cv2.imencode('.jpg', frame)[1]

        stringData = base64.b64encode(imgencode).decode('utf-8')
        b64_src = 'data:image/jpg;base64,'
        stringData = b64_src + stringData
        emit('response_back', stringData)

My GPU is an Nvidia 1050 Ti and my CPU is a 9th-gen i5, in case anyone needs the specs. Can someone enlighten me? I'm very confused right now. Thanks a lot.

Edit 1: I tried using cv2.dnn.DNN_TARGET_CUDA instead of cv2.dnn.DNN_TARGET_CUDA_FP16, but the times are still bad compared to the CPU. Here are the GPU results:

--- 10.91195559501648 seconds ---
--- 11.344025135040283 seconds ---
--- 11.754926204681396 seconds ---
--- 12.779674530029297 seconds ---

And here are the CPU results:

--- 4.780993223190308 seconds ---
--- 4.910650253295898 seconds ---
--- 4.990436553955078 seconds ---
--- 5.246175050735474 seconds ---

It is still slower than the CPU.

Edit 2: OpenCV is 4.5.0, CUDA 11.1, and cuDNN 8.0.1.


3 Answers


You should definitely only load YOLO once. Recreating it for every image that comes through the socket is slow for both the CPU and the GPU, but the GPU takes much longer to load initially, which is why you are seeing it run slower than the CPU.

I don't understand what you mean by using an LRU cache for the YOLO model. Without seeing the rest of your code structure I can't make any concrete suggestions, but could you at least temporarily try putting the network in global scope to see if it runs faster? (Remove the function entirely and put its body at module level.) There is also a small lru_cache sketch after the example below.

Something like this:

net = cv2.dnn.readNet("yolov4.weights", "yolov4.cfg")

net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)

classes = []
with open("coco.names", "r") as f:
    classes = [line.strip() for line in f.readlines()]

layer_names = net.getLayerNames()
output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]


@socketio.on('image')
def image(data_image):

    sbuf = StringIO()
    sbuf.write(data_image)
    
    b = io.BytesIO(base64.b64decode(data_image))
    if(str(data_image) == 'data:,'):
        pass
    else:
        pimg = Image.open(b)
    
        frame = cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)
        frame = resize(frame, width=700)
        frame = cv2.flip(frame, 1)
    
        height, width, channels = frame.shape

        
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
        swapRB=True, crop=False)

       
        net.setInput(blob)
        outs = net.forward(output_layers)
        print("--- %s seconds ---" % (time.time() - start_time))
        
        
        class_ids = []
        confidences = []
        boxes = []
        for out in outs:
            for detection in out:
                scores = detection[5:]
                class_id = np.argmax(scores)
                confidence = scores[class_id]
                if confidence > 0.5:
                    # Object detected
                    center_x = int(detection[0] * width)
                    center_y = int(detection[1] * height)
                    w = int(detection[2] * width)
                    h = int(detection[3] * height)

                    # Rectangle coordinates
                    x = int(center_x - w / 2)
                    y = int(center_y - h / 2)

                    boxes.append([x, y, w, h])
                    confidences.append(float(confidence))
                    class_ids.append(class_id)

        indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
        font = cv2.FONT_HERSHEY_PLAIN
        colors = np.random.uniform(0, 255, size=(len(classes), 3))
        for i in range(len(boxes)):
            if i in indexes:
                x, y, w, h = boxes[i]
                label = str(classes[class_ids[i]])
                color = colors[class_ids[i]]
                cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
                cv2.putText(frame, label, (x, y + 30), font, 1, color, 2)
    
        imgencode = cv2.imencode('.jpg', frame)[1]

        stringData = base64.b64encode(imgencode).decode('utf-8')
        b64_src = 'data:image/jpg;base64,'
        stringData = b64_src + stringData
        emit('response_back', stringData)
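
As for the LRU-cache idea mentioned above: if you would rather keep the loadYolo function than move its body to module level, a minimal sketch (assuming the same files and the same OpenCV 4.5.0 layer indexing as in the question) would be to memoize the loader with functools.lru_cache, so the network is only built on the first call:

from functools import lru_cache

@lru_cache(maxsize=1)
def loadYolo():
    # First call builds the network; every later call returns the cached tuple.
    net = cv2.dnn.readNet("yolov4.weights", "yolov4.cfg")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)

    with open("coco.names", "r") as f:
        classes = [line.strip() for line in f]

    layer_names = net.getLayerNames()
    output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
    return net, classes, layer_names, output_layers

Either way, the point is the same: the expensive readNet and CUDA initialization happen once, not once per frame.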
Answered 2021-04-27T14:50:48.460

Based on the two answers above, I arrived at the solution by changing:

net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)

into:

net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

Since my GPU is not compatible with FP16, that change alone roughly doubled the GPU speed, thanks to Amir Karami. And although Ian Chu's answer did not solve my problem by itself, it gave me the basis for forcing all images to use a single net instance, which dramatically lowered the processing time from about 10 seconds per image to 0.03-0.04 seconds, surpassing the CPU speed many times over. The reason I did not accept either answer is that neither one fully solved my problem on its own, but both became the solid foundation of my solution, so I still upvoted them. I'm just leaving my answer here in case anyone runs into the same problem I did.
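
For anyone who wants the combined fix in one place, here is a minimal sketch of the two changes that worked for me: build the network once at module level, and target DNN_TARGET_CUDA instead of DNN_TARGET_CUDA_FP16.

# Built once at import time, not once per frame.
net = cv2.dnn.readNet("yolov4.weights", "yolov4.cfg")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# FP32 CUDA target: the 1050 Ti gains nothing from FP16.
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

layer_names = net.getLayerNames()
output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]

# Inside the socket handler, only the per-frame work remains:
#   blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
#   net.setInput(blob)
#   outs = net.forward(output_layers)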

Answered 2021-04-30T04:19:42.313

DNN_TARGET_CUDA_FP16 refers to 16-bit floating point. Since your GPU is a 1050 Ti, it does not look like your GPU works well with FP16. You can check that here, and check your compute capability here. I think you should change this line:

net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)

into:

net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
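
As a quick sanity check (a hedged sketch; it assumes your OpenCV build was compiled with the CUDA module), you can print the GPU's properties, including its compute capability, directly from Python:

import cv2

# Both calls require an OpenCV build with CUDA support.
print(cv2.cuda.getCudaEnabledDeviceCount())  # should be >= 1
cv2.cuda.printCudaDeviceInfo(0)              # prints name, compute capability, memory, ...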
Answered 2021-04-27T04:26:08.203