
So I have a set of images with color-coded masks, e.g. blue for chairs, red for lamps, and so on.

Since I'm new to all of this, I tried to do it with a U-Net model, and I have been preparing the images with Keras like this:

import os
import random

import cv2
import numpy as np

def data_generator(img_path, mask_path, batch_size):
    c = 0
    # Sort both listings so each image lines up with its mask, then
    # shuffle them together so the image/mask pairing stays intact
    # (shuffling only the image list would mismatch the pairs).
    n = sorted(os.listdir(img_path))
    m = sorted(os.listdir(mask_path))
    pairs = list(zip(n, m))
    random.shuffle(pairs)
    while True:
        img = np.zeros((batch_size, 256, 256, 3)).astype("float")
        mask = np.zeros((batch_size, 256, 256, 1)).astype("float")

        for i in range(c, c + batch_size):
            img_name, mask_name = pairs[i]
            train_img = cv2.imread(img_path + "/" + img_name) / 255.
            train_img = cv2.resize(train_img, (256, 256))
            img[i - c] = train_img

            train_mask = cv2.imread(mask_path + "/" + mask_name, cv2.IMREAD_GRAYSCALE) / 255.
            train_mask = cv2.resize(train_mask, (256, 256))
            train_mask = train_mask.reshape(256, 256, 1)

            mask[i - c] = train_mask

        c += batch_size
        if c + batch_size >= len(pairs):
            c = 0
            random.shuffle(pairs)

        yield img, mask
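
As a quick sanity check, I can pull a single batch from the generator and confirm the shapes before training (a minimal sketch; the train_frames/train_masks directory names are placeholders for wherever the data actually lives):

# Placeholder paths - substitute the actual image and mask folders.
gen = data_generator("train_frames", "train_masks", batch_size=4)
img_batch, mask_batch = next(gen)
print(img_batch.shape)   # expected: (4, 256, 256, 3)
print(mask_batch.shape)  # expected: (4, 256, 256, 1)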

Now, looking at this more closely, I don't think this approach works for my masks; I tried treating the masks as RGB images instead, but the model wouldn't train that way.

The model:

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, UpSampling2D, concatenate
from keras.optimizers import Adam

def unet(pretrained_weights = None, input_size = (256,256,3)):
    inputs = Input(input_size)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)

    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
    drop5 = Dropout(0.5)(conv5)

    up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
    merge6 = concatenate([drop4,up6], axis = 3)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)

    up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
    merge7 = concatenate([conv3,up7], axis = 3)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)

    up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
    merge8 = concatenate([conv2,up8], axis = 3)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)

    up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
    merge9 = concatenate([conv1,up9], axis = 3)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)

    model = Model(inputs = inputs, outputs = conv10)

    model.compile(optimizer = Adam(learning_rate = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy'])

    #model.summary()

    if(pretrained_weights):
        model.load_weights(pretrained_weights)

    return model
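
For context, this is roughly how the generator and the model fit together (a sketch, not code from my actual run; steps_per_epoch and the directory names are placeholder values, and on recent Keras/TensorFlow versions model.fit accepts a Python generator directly, while older versions used fit_generator):

# Hypothetical wiring of the generator and the model; adjust the paths,
# batch size, and steps_per_epoch to match the dataset size.
model = unet()
train_gen = data_generator("train_frames", "train_masks", batch_size=4)
model.fit(train_gen, steps_per_epoch=100, epochs=20)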

So my question is: how do I train the model with color-coded image masks?

Edit: an example of the data I have.

An example image used to train the model: [image]

Its mask: [image]

And the per-class percentages for each such mask: {"water": 4.2, "building": 33.5, "road": 0.0}


1 Answer


In a semantic segmentation problem, every pixel belongs to one of the target output classes/labels. So your output layer conv10 should use the total number of classes (n_classes) as its number of kernels, together with a softmax activation function, like this:

conv10 = Conv2D(n_classes, 1, activation = 'softmax')(conv9)

In this case you should also change the loss to categorical_crossentropy when compiling the U-Net model:

model.compile(optimizer = Adam(learning_rate = 1e-4), loss = 'categorical_crossentropy', metrics = ['accuracy'])

Also, you should not normalize your ground-truth labels/mask images. Instead, you can one-hot encode them like this:

# `img` here is the mask whose pixel values are integer class
# indices (0 .. n_classes-1); each class gets its own channel.
train_mask = np.zeros((height, width, n_classes))
for c in range(n_classes):
    train_mask[:, :, c] = (img == c).astype(int)
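
Since your masks are color-coded rather than already stored as integer class indices, you first need to map each color to an index. A minimal sketch of that step (the BGR color values and class names here are made-up placeholders; substitute the actual colors used in your masks):

import numpy as np

# Hypothetical color -> class-index table. OpenCV loads images as BGR,
# so replace these tuples with the real BGR values in your masks.
COLOR_TO_CLASS = {
    (255, 0, 0): 0,   # blue  -> e.g. water
    (0, 0, 255): 1,   # red   -> e.g. building
    (0, 255, 0): 2,   # green -> e.g. road
}

def color_mask_to_onehot(mask_bgr, n_classes):
    # mask_bgr: (height, width, 3) array as returned by cv2.imread
    h, w, _ = mask_bgr.shape
    onehot = np.zeros((h, w, n_classes))
    for color, c in COLOR_TO_CLASS.items():
        # Pixels matching this color on all three channels belong to class c.
        matches = np.all(mask_bgr == np.array(color), axis=-1)
        onehot[:, :, c][matches] = 1
    return onehot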

[I assume you have more than two ground-truth output classes/labels, since you mentioned that your masks contain different colors for water, roads, buildings, etc.; if you only have two classes, then your model configuration is fine as-is, apart from the train_mask handling.]

Answered 2019-10-02T01:54:32.900