
This is probably a simple question, but I have been stuck on it for a while now. I want to train an FCN on Amazon AWS. To do that, I want to follow the process used in this example (https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/semantic_segmentation_pascalvoc/semantic_segmentation_pascalvoc.ipynb) with my own dataset.

In contrast to that process, I stored my training and annotation images (as .png) in an S3 bucket with four folders (Training, TrainingAnnotation, Validation, ValidationAnnotation). The files in the Training and Annotation folders have the same names.

I trained my model with the following code:

%%time
import sagemaker
from sagemaker import get_execution_role

role = get_execution_role()
print(role)

sess = sagemaker.Session()  # SageMaker session; presumably created earlier in the original notebook
bucket = sess.default_bucket()
prefix = 'semantic-segmentation'
print(bucket)

from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'semantic-segmentation', repo_version="latest")
print (training_image)

s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
print(s3_output_location)

# Create the sagemaker estimator object.
ss_model = sagemaker.estimator.Estimator(training_image,
                                         role, 
                                         train_instance_count = 1, 
                                         train_instance_type = 'ml.p2.xlarge',
                                         train_volume_size = 50,
                                         train_max_run = 360000,
                                         output_path = s3_output_location,
                                         base_job_name = 'ss-notebook-demo',
                                         sagemaker_session = sess)
num_training_samples=5400
# Setup hyperparameters 
ss_model.set_hyperparameters(backbone='resnet-50', 
                             algorithm='fcn',                   
                             use_pretrained_model='True', 
                             crop_size=248,
                             num_classes=4, 
                             epochs=10, 
                             learning_rate=0.0001,                             
                             optimizer='rmsprop',
                             lr_scheduler='poly',
                             mini_batch_size=16, 
                             validation_mini_batch_size=16,
                             early_stopping=True, 
                             early_stopping_patience=2, 
                             early_stopping_min_epochs=10,    
                             num_training_samples=num_training_samples) 
# Create full bucket names

bucket1 = 'imagelabel1' 
train_channel = 'Training'
validation_channel = 'Validation'
train_annotation_channel = 'TrainingAnnotation'
validation_annotation_channel =  'ValidataionAnnotation'


s3_train_data = 's3://{}/{}'.format(bucket1, train_channel)
s3_validation_data = 's3://{}/{}'.format(bucket1, validation_channel)
s3_train_annotation = 's3://{}/{}'.format(bucket1, train_annotation_channel)
s3_validation_annotation  = 's3://{}/{}'.format(bucket1, validation_annotation_channel)



distribution = 'FullyReplicated'
# Create sagemaker s3_input objects
train_data = sagemaker.session.s3_input(s3_train_data, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')
train_annotation = sagemaker.session.s3_input(s3_train_annotation, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')
validation_annotation = sagemaker.session.s3_input(s3_validation_annotation, distribution=distribution, 
                                        content_type='image/png', s3_data_type='S3Prefix')

data_channels = {'train': train_data, 
                 'validation': validation_data,
                 'train_annotation': train_annotation, 
                 'validation_annotation':validation_annotation}
ss_model.fit(inputs=data_channels, logs=True)

The error message is:

ValueError: Error for Training job ss-notebook-demo-2019-07-15-06-42-25-784: Failed Reason: ClientError: train channel is empty.

Does anyone know what is wrong with this code?

Thanks

Simon


1 Answer


It looks like your folder hierarchy is not using the correct names. According to the documentation (https://docs.aws.amazon.com/sagemaker/latest/dg/semantic-segmentation.html#semantic-segmentation-inputoutput), it should look like this:

s3://bucket_name
    |
    |- train
                 |
                 | - 0000.jpg
                 | - coffee.jpg
    |- validation
                 |
                 | - 00a0.jpg
                 | - bananna.jpg              
    |- train_annotation
                 |
                 | - 0000.png
                 | - coffee.png
    |- validation_annotation
                 |
                 | - 00a0.png   
                 | - bananna.png 
    |- label_map
                 | - train_label_map.json  
                 | - validation_label_map.json 
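If you prefer to keep the images you already uploaded, one option is to copy them under new prefixes that match this layout. Below is a minimal sketch (one possible approach, not an official fix) that assumes the bucket is imagelabel1 and that the current folders are the ones named in your question; the optional label_map channel is omitted:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('imagelabel1')

# Map the current folder names to the prefixes expected by the documented layout
prefix_map = {'Training': 'train',
              'Validation': 'validation',
              'TrainingAnnotation': 'train_annotation',
              'ValidationAnnotation': 'validation_annotation'}

for old_prefix, new_prefix in prefix_map.items():
    for obj in bucket.objects.filter(Prefix=old_prefix + '/'):
        # Copy each object to the same key under the new prefix;
        # the originals can be deleted afterwards if desired
        new_key = obj.key.replace(old_prefix + '/', new_prefix + '/', 1)
        bucket.copy({'Bucket': 'imagelabel1', 'Key': obj.key}, new_key)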

Fixing these prefixes should solve your problem:

train_channel = 'Training'
validation_channel = 'Validation'
train_annotation_channel = 'TrainingAnnotation'
validation_annotation_channel =  'ValidataionAnnotation'
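
For example, once the data sits under the documented layout above, the channel prefixes in your notebook would line up with the expected names (a sketch, assuming the data was copied as in the earlier snippet):

train_channel = 'train'
validation_channel = 'validation'
train_annotation_channel = 'train_annotation'
validation_annotation_channel = 'validation_annotation'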
answered 2019-07-18T10:13:15.837