
I am trying to train deepAR on multiple time series using the GluonTS implementation of deepAR (with the m5 dataset). However, when I train deepAR on a single time series from the dataset, training takes the same amount of time as training the model on 100 (or more) time series. I have spent several hours trying to understand what might be going wrong, but I have not found a potential solution yet. Here is the code to reproduce the problem, assuming you have downloaded the m5 dataset:

from gluonts.dataset.common import ListDataset
from gluonts.mx import Trainer
from gluonts.evaluation import make_evaluation_predictions
from gluonts.model import deepar
from gluonts.mx.distribution.neg_binomial import NegativeBinomialOutput

import numpy as np
import pandas as pd

########################
##### PREPARING THE DATA
########################
prediction_length = 28
freq = "D"
start = pd.Timestamp("29-01-2011")

# load data
ste = pd.read_csv("sales_train_evaluation.csv")

# pandas Series of item 2
eva = ste.iloc[1,6:]

# 1-dimensional array containing time series data of item 2
item = np.array(ste.iloc[1,6:])

# Convert item to GluonTS-compatible ListDataset object
train_1 = ListDataset(
    [{'target': item[:-prediction_length], 'start':start}],
    freq=freq
)

# 2-dimensional array, containing time series data of 100 items
items = np.array(ste.iloc[1:101,6:])

# Convert to GluonTS-compatible ListDataset object
# train_100 contains 100 dictionaries, each corresponding to a given time series
train_100 = ListDataset(
    [{'target': ts, 'start':start} for ts in items[:, :-prediction_length]],
    freq=freq
)
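
Not part of the original script, but as a quick sanity check the two ListDataset objects can be inspected to confirm that they really contain 1 and 100 series respectively:

# quick sanity check (illustrative addition, not in the original script)
print(len(train_1))                          # expected: 1
print(len(train_100))                        # expected: 100
print(next(iter(train_1))["target"].shape)   # length of the single training series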

########################
##### TRAINING THE MODEL
########################
nbo = NegativeBinomialOutput()
trainer = Trainer(epochs=5)

# train deepAR on 1 time series
estimator1 = deepar.DeepAREstimator(
    freq="D", prediction_length=28, trainer=trainer, distr_output=nbo
)
estimator1.train(training_data=train_1)

# train deepAR on 100 time series
estimator100 = deepar.DeepAREstimator(
    freq="D", prediction_length=28, trainer=trainer, distr_output=nbo
)
estimator100.train(training_data=train_100)
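
To put a number on the comparison, the two training runs can be wrapped with timers, for example (an illustrative addition, meant to replace the two bare .train() calls above):

# optional: time the two runs to compare them directly
import time

t0 = time.perf_counter()
estimator1.train(training_data=train_1)
t1 = time.perf_counter()
estimator100.train(training_data=train_100)
t2 = time.perf_counter()

print(f"1 series:   {t1 - t0:.1f} s")
print(f"100 series: {t2 - t1:.1f} s")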

1 Answer


Yes, for 100 time series this does take time. Are you using a GPU or a CPU? It works fine for me when I use a GPU:

from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.mx import Trainer

# custom_ds_metadata is assumed to be a dict with 'prediction_length' and 'freq' keys,
# as in the GluonTS tutorials
estimator = SimpleFeedForwardEstimator(
    num_hidden_dimensions=[10],
    prediction_length=custom_ds_metadata['prediction_length'],
    context_length=2 * custom_ds_metadata['prediction_length'],
    freq=custom_ds_metadata['freq'],
    trainer=Trainer(
        ctx="gpu",
        epochs=5,
        learning_rate=1e-3,
        hybridize=False,
        num_batches_per_epoch=100
    )
)
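
The snippet above uses SimpleFeedForwardEstimator; a minimal sketch of passing the same GPU-backed Trainer to the DeepAREstimator from the question (assuming a CUDA-enabled mxnet build, so that ctx="gpu" is usable) could look like this:

from gluonts.mx import Trainer
from gluonts.model import deepar
from gluonts.mx.distribution.neg_binomial import NegativeBinomialOutput

# same estimator as in the question, but with the trainer placed on the GPU
estimator = deepar.DeepAREstimator(
    freq="D",
    prediction_length=28,
    distr_output=NegativeBinomialOutput(),
    trainer=Trainer(
        ctx="gpu",                   # requires a CUDA build of mxnet
        epochs=5,
        learning_rate=1e-3,
        num_batches_per_epoch=100,   # each epoch runs a fixed number of batches
    ),
)
estimator.train(training_data=train_100)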
answered 2021-10-25T09:26:28.577