
I have been trying to deploy my model for prediction to AI Platform from the console on my VM instance, but I keep getting the error "(gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: problem in predictor - ModuleNotFoundError: No module named 'torchvision' (Error code: 0)"".

I need to include both torch and torchvision. I followed the steps in the question Cannot deploy trained model to Google Cloud AI-Platform with custom prediction routine: Model requires more memory than allowed, but I could not fetch the files that user gogasca points to. I tried downloading this .whl file from the PyTorch website and uploading it to my Cloud Storage bucket, but got the same error that there is no module named torchvision, even though that build is supposed to include both torch and torchvision. I also tried the Cloud AI compatible packages listed here, but they do not include torchvision.
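
(The upload itself is just a copy of the downloaded wheel into the bucket, e.g. with gsutil; the destination path here matches the one used in the command further down:)

gsutil cp torch-1.4.0+cpu-cp37-cp37m-linux_x86_64.whl gs://BUCKET/torchvision-dir/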

I also tried pointing to two separate .whl files, one for torch and one for torchvision, in the --package-uris argument, both referencing files in my Cloud Storage, but then I got the error about exceeding the memory capacity. That is strange, because together they are only about 130 MB. An example of my command that results in the missing torchvision module looks like this:

gcloud beta ai-platform versions create version_1 \
  --model online_pred_1 \
  --runtime-version 1.15 \
  --python-version 3.7 \
  --origin gs://BUCKET/model-dir \
  --package-uris gs://BUCKET/staging-dir/my_package-0.1.tar.gz,gs://BUCKET/torchvision-dir/torch-1.4.0+cpu-cp37-cp37m-linux_x86_64.whl \
  --prediction-class predictor.MyPredictor

I have tried pointing to various combinations of .whl files obtained from different sources, but I get either the no-module error or the out-of-memory error. I don't understand how the modules interact in this case and why the runtime thinks the module does not exist. How can I fix this? Or, alternatively, how can I build a package myself that contains both torch and torchvision? Please give a detailed answer, as I am not very familiar with package management and bash scripting.

Here is the torch_model.py code I am using:

from torch import nn


class EthnicityClassifier44(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=1, padding=3)
        self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv22 = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)
        self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.maxpool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv4 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
        self.maxpool4 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.relu = nn.ReLU(inplace=False)
        self.fc1 = nn.Linear(8*8*128, 128)
        self.fc2 = nn.Linear(128, 128)
        self.fc4 = nn.Linear(128, num_classes)


    def forward(self, x):
        x = self.relu(self.conv1(x))
        x = self.maxpool1(x)
        x = self.relu(self.conv22(x))
        x = self.maxpool2(x)
        x = self.maxpool3(self.relu(self.conv3(x)))
        x = self.maxpool4(self.relu(self.conv4(x)))
        x = self.relu(self.fc1(x.view(x.shape[0], -1)))
        x = self.relu(self.fc2(x))
        x = self.fc4(x)
        return x

And here is predictor.py:

from facenet_pytorch import MTCNN, InceptionResnetV1, extract_face
import torch
import torchvision
from torchvision import transforms
from torch.nn import functional as F
from PIL import Image
from sklearn.externals import joblib
import numpy as np
import os
import torch_model


class MyPredictor(object):

    import torch
    import torchvision

    def __init__(self, model, preprocessor, device):
        """Stores artifacts for prediction. Only initialized via `from_path`.
        """
        self._resnet = model
        self._mtcnn_mult = preprocessor
        self._device = device
        self.get_std_tensor = transforms.Compose([
            np.float32,
            np.uint8,
            transforms.ToTensor(),
        ])
        self.tensor2pil = transforms.ToPILImage(mode='RGB')
        self.trans_resnet = transforms.Compose([
            transforms.Resize((100, 100)),
            np.float32,
            transforms.ToTensor()
        ])

    def predict(self, instances, **kwargs):

        pil_transform = transforms.Resize((512, 512))

        imarr = np.asarray(instances)
        pil_im = Image.fromarray(imarr)
        image = pil_im.convert('RGB')
        pil_im_512 = pil_transform(image)

        boxes, _ = self._mtcnn_mult(pil_im_512)
        box = boxes[0]

        face_tensor = extract_face(pil_im_512, box, margin=40)
        std_tensor = self.get_std_tensor(face_tensor.permute(1, 2, 0))
        cropped_pil_im = self.tensor2pil(std_tensor)

        face_tensor = self.trans_resnet(cropped_pil_im)
        face_tensor4d = face_tensor.unsqueeze(0)
        face_tensor4d = face_tensor4d.to(self._device)

        prediction = self._resnet(face_tensor4d)
        preds = F.softmax(prediction, dim=1).detach().numpy().reshape(-1)
        print('probability of (class1, class2) = ({:.4f}, {:.4f})'.format(preds[0], preds[1]))

        return preds.tolist()

    @classmethod
    def from_path(cls, model_dir):
        import torch
        import torchvision
        import torch_model

        model_path = os.path.join(model_dir, 'class44_M40RefinedExtra_bin_no_norm_7860.joblib')
        classifier = joblib.load(model_path)

        mtcnn_path = os.path.join(model_dir, 'mtcnn_mult.joblib')
        mtcnn_mult = joblib.load(mtcnn_path)

        device_path = os.path.join(model_dir, 'device_cpu.joblib')
        device = joblib.load(device_path)

        return cls(classifier, mtcnn_mult, device)

And setup.py:

from setuptools import setup

REQUIRED_PACKAGES = ['opencv-python-headless', 'facenet-pytorch']

setup(
 name="my_package",
 version="0.1",
 include_package_data=True,
 scripts=["predictor.py", "torch_model.py"],
 install_requires=REQUIRED_PACKAGES
)

1 Answer


The solution was to put the following packages in the setup.py file of the custom prediction code:

REQUIRED_PACKAGES = ['torchvision==0.5.0', 'torch @ https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl', 'opencv-python', 'facenet-pytorch']
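
For reference, a minimal sketch of what the full setup.py looks like with this change applied to the file shown in the question (everything else unchanged apart from REQUIRED_PACKAGES):

from setuptools import setup

# Point torch at the CPU-only wheel and pin torchvision so that
# AI Platform installs both when the version is created.
REQUIRED_PACKAGES = [
    'torch @ https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl',
    'torchvision==0.5.0',
    'opencv-python',
    'facenet-pytorch'
]

setup(
    name="my_package",
    version="0.1",
    include_package_data=True,
    scripts=["predictor.py", "torch_model.py"],
    install_requires=REQUIRED_PACKAGES
)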

After that I ran into a different problem with the instantiation of the custom class, but this post explains it well. With that, I was able to successfully deploy my model to AI Platform for prediction.
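
For completeness, a sketch of the version-creation command under the assumption that, with torch and torchvision now installed through install_requires, only the packaged source tarball still needs to be listed in --package-uris:

gcloud beta ai-platform versions create version_1 \
  --model online_pred_1 \
  --runtime-version 1.15 \
  --python-version 3.7 \
  --origin gs://BUCKET/model-dir \
  --package-uris gs://BUCKET/staging-dir/my_package-0.1.tar.gz \
  --prediction-class predictor.MyPredictor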

Answered 2020-05-23T15:19:21.540