
I am trying to feed a very large image to a Triton server. I need to split the input image into patches and feed the patches one by one into a TensorFlow model. The image has a variable size, so the number of patches N is different for each call.

I think a Triton ensemble model that calls the following steps could do the job:

  1. A Python model to create the patches (pre-processing)
  2. The segmentation model
  3. Finally, another Python model (post-processing) to merge the output patches into one big output mask

However, for this I would have to write a config.pbtxt file with 1:N and N:1 relations, meaning the ensemble scheduler would need to call the second step multiple times and then call the third step once with the aggregated outputs.

Is this possible, or do I need to use some other technique?
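For reference, I picture step 1 as an ordinary Python-backend model along these lines (a rough sketch only; the tensor names "IMAGE"/"PATCHES" and the make_patches helper are placeholders, and this does not yet express the 1:N fan-out I am asking about):

import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # read the full image sent by the client
            image = pb_utils.get_input_tensor_by_name(request, "IMAGE").as_numpy()
            # make_patches is a placeholder for the actual patching logic
            patches = make_patches(image)  # (N, patch_h, patch_w, 3), N varies per image
            out = pb_utils.Tensor("PATCHES", patches.astype(np.float32))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses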


1 Answer


Disclaimer

The following answer is not an actual solution to the question above; I misunderstood the query. I will leave the response up in case future readers find it useful.


Input

import cv2 
import matplotlib.pyplot as plt

input_img = cv2.imread('/content/2.jpeg')
print(input_img.shape) # (719, 640, 3)
plt.imshow(input_img) 

Slicing and Stitching

The following functions are adopted from here. More details and discussion can be found here. On top of the original code, the necessary functions are gathered together and placed in a single class (ImageSliceRejoin).

# ref: https://github.com/idealo/image-super-resolution
import numpy as np

class ImageSliceRejoin:
    def pad_patch(self, image_patch, padding_size, channel_last=True):
        """ Pads image_patch with padding_size edge values. """
        if channel_last:
            return np.pad(
                image_patch,
                ((padding_size, padding_size), 
                (padding_size, padding_size), (0, 0)),
                'edge',
            )
        else:
            return np.pad(
                image_patch,
                ((0, 0), (padding_size, padding_size), (padding_size, padding_size)),
                'edge',
            )

    # function to split the image into patches        
    def split_image_into_overlapping_patches(self, image_array, patch_size, padding_size=2):
        """ Splits the image into partially overlapping patches.
        The patches overlap by padding_size pixels.
        Pads the image twice:
            - first to have a size multiple of the patch size,
            - then to have equal padding at the borders.
        Args:
            image_array: numpy array of the input image.
            patch_size: size of the patches from the original image (without padding).
            padding_size: size of the overlapping area.
        """
        xmax, ymax, _ = image_array.shape
        x_remainder = xmax % patch_size
        y_remainder = ymax % patch_size
        
        # modulo here is to avoid extending of patch_size instead of 0
        x_extend = (patch_size - x_remainder) % patch_size
        y_extend = (patch_size - y_remainder) % patch_size
        
        # make sure the image is divisible into regular patches
        extended_image = np.pad(image_array, ((0, x_extend), (0, y_extend), (0, 0)), 'edge')
        
        # add padding around the image to simplify computations
        padded_image = self.pad_patch(extended_image, padding_size, channel_last=True)
        
        xmax, ymax, _ = padded_image.shape
        patches = []
        
        x_lefts = range(padding_size, xmax - padding_size, patch_size)
        y_tops = range(padding_size, ymax - padding_size, patch_size)
        
        for x in x_lefts:
            for y in y_tops:
                x_left = x - padding_size
                y_top = y - padding_size
                x_right = x + patch_size + padding_size
                y_bottom = y + patch_size + padding_size
                patch = padded_image[x_left:x_right, y_top:y_bottom, :]
                patches.append(patch)
        
        return np.array(patches), padded_image.shape

    # join the patches
    def stich_together(self, patches, padded_image_shape, target_shape, padding_size=4):
        """ Reconstruct the image from overlapping patches.
        After scaling, shapes and padding should be scaled too.
        Args:
            patches: patches obtained with split_image_into_overlapping_patches
            padded_image_shape: shape of the padded image constructed in split_image_into_overlapping_patches
            target_shape: shape of the final image
            padding_size: size of the overlapping area.
        """
        xmax, ymax, _ = padded_image_shape

        # unpad patches
        patches = patches[:, padding_size:-padding_size, padding_size:-padding_size, :]

        patch_size = patches.shape[1]
        n_patches_per_row = ymax // patch_size
        complete_image = np.zeros((xmax, ymax, 3))

        row = -1
        col = 0
        for i in range(len(patches)):
            if i % n_patches_per_row == 0:
                row += 1
                col = 0
            complete_image[
            row * patch_size: (row + 1) * patch_size, col * patch_size: (col + 1) * patch_size, :
            ] = patches[i]
            col += 1
        return complete_image[0: target_shape[0], 0: target_shape[1], :]

Start Slicing

import numpy as np 

isr = ImageSliceRejoin()
padding_size = 1

patches, p_shape = isr.split_image_into_overlapping_patches(
    input_img, 
    patch_size=220, 
    padding_size=padding_size
)

patches.shape, p_shape, input_img.shape
((12, 222, 222, 3), (882, 662, 3), (719, 640, 3))
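
As a quick sanity check on these numbers, the padded size and the patch count can be reproduced by hand from the parameters above (a small standalone calculation, separate from the class):

h, w = 719, 640                    # input image size
patch_size, padding_size = 220, 1

# pad each axis up to a multiple of patch_size, then add the border padding
padded_h = h + (patch_size - h % patch_size) % patch_size + 2 * padding_size  # 882
padded_w = w + (patch_size - w % patch_size) % patch_size + 2 * padding_size  # 662

# patches per axis and in total
n_rows = (padded_h - 2 * padding_size) // patch_size  # 4
n_cols = (padded_w - 2 * padding_size) // patch_size  # 3
print(padded_h, padded_w, n_rows * n_cols)            # 882 662 12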

Verification

n = int(np.ceil(patches.shape[0] / 2))  # subplot grid size; subplot expects integers
plt.figure(figsize=(20, 20))
patch_size = patches.shape[1]

for i in range(patches.shape[0]):
    patch = patches[i] 
    ax = plt.subplot(n, n, i + 1)
    patch_img = np.reshape(patch, (patch_size, patch_size, 3))
    plt.imshow(patch_img.astype("uint8"))
    plt.axis("off")


Inference

I'm using the Image-Super-Resolution model for the demonstration.

# import model
from ISR.models import RDN
model = RDN(weights='psnr-small')

# number of patches that will pass to model for inference: 
# here, batch_size < len(patches)
batch_size = 2

for i in range(0, len(patches), batch_size):
    # get some patches
    batch = patches[i: i + batch_size]

    # pass them to model to give patches output 
    batch = model.model.predict(batch)

    # save the output patches 
    if i == 0:
        collect = batch
    else:
        collect = np.append(collect, batch, axis=0)

Now collect holds the model's output for each patch.

patches.shape, collect.shape
((12, 222, 222, 3), (12, 444, 444, 3))
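
One note on the collection loop above: np.append copies the whole accumulated array on every iteration. With many patches, collecting the batch outputs in a list and concatenating once at the end is a common alternative (a sketch using the same model and patches variables):

outputs = []
for i in range(0, len(patches), batch_size):
    # run each mini-batch of patches through the model
    outputs.append(model.model.predict(patches[i: i + batch_size]))
collect = np.concatenate(outputs, axis=0)  # same (12, 444, 444, 3) result as before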

Rejoin the Patches

# the SR model upscales by 2x (patches go from 222 to 444 above),
# so the padded shape, target shape, and padding size are scaled accordingly
scale = 2
padded_size_scaled = tuple(np.multiply(p_shape[0:2], scale)) + (3,)
scaled_image_shape = tuple(np.multiply(input_img.shape[0:2], scale)) + (3,)

sr_img = isr.stich_together(
    collect,
    padded_image_shape=padded_size_scaled,
    target_shape=scaled_image_shape,
    padding_size=padding_size * scale,
)

Verification

print(input_img.shape, sr_img.shape)
# (719, 640, 3) (1438, 1280, 3)

fig, ax = plt.subplots(1,2)
fig.set_size_inches(18.5, 10.5)
ax[0].imshow(input_img)
ax[1].imshow(sr_img.astype('uint8'))


Answered 2021-05-07T13:58:54.997