When doing GP regression in GPflow 2.0, I want to set hard bounds on the lengthscale (i.e., restrict the range over which the lengthscale can be optimized). Following this thread (Setting hyperparameter optimization bounds in GPflow 2.0), I built a chain of TensorFlow Probability bijectors (see the bounded_lengthscale function below). However, the bijector chain below does not prevent the model from optimizing outside the intended bounds. What do I need to change so that the bounded_lengthscale function imposes hard bounds on the optimization?
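To make sure I understand what the chain is supposed to do: as far as I can tell, tfb.Chain applies its bijectors right-to-left, so tfb.Chain([affine, sigmoid]) first squashes an unconstrained value through the sigmoid and then rescales it into (low, high). A quick standalone check (just a sketch with placeholder bounds, separate from the model code):

from tensorflow_probability import bijectors as tfb

low, high = 0.0, 1.0
logistic = tfb.Chain([tfb.AffineScalar(shift=low, scale=high - low), tfb.Sigmoid()])

print(logistic.forward(0.0))    # midpoint of (low, high), i.e. 0.5
print(logistic.forward(50.0))   # saturates just below the upper bound
print(logistic.inverse(0.25))   # maps back to an unconstrained value

So the transform itself seems to map the unconstrained space into (low, high) as expected; the problem appears to be in how it gets attached to the model.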
Here is a minimal reproducible example (MRE):
import gpflow
import numpy as np
from gpflow.utilities import print_summary
import tensorflow as tf
from tensorflow_probability import bijectors as tfb
# Noisy training data
noise = 0.3
X = np.arange(-3, 4, 1).reshape(-1, 1).astype('float64')
Y = (np.sin(X) + noise * np.random.randn(*X.shape)).reshape(-1,1)
def bounded_lengthscale(low, high, lengthscale):
    """Returns lengthscale Parameter with optimization bounds."""
    affine = tfb.AffineScalar(shift=low, scale=high - low)
    sigmoid = tfb.Sigmoid()
    # Chain applies right-to-left: sigmoid first, then the affine rescaling to (low, high)
    logistic = tfb.Chain([affine, sigmoid])
    parameter = gpflow.Parameter(lengthscale, transform=logistic, dtype=tf.float32)
    parameter = tf.cast(parameter, dtype=tf.float64)
    return parameter
# build GPR model
k = gpflow.kernels.Matern52()
m = gpflow.models.GPR(data=(X, Y), kernel=k)
m.kernel.lengthscale.assign(bounded_lengthscale(0, 1, 0.5))
print_summary(m)
# train model
@tf.function(autograph=False)
def objective_closure():
    return -m.log_marginal_likelihood()
opt = gpflow.optimizers.Scipy()
opt_logs = opt.minimize(objective_closure,
                        m.trainable_variables)
print_summary(m)
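In case it is relevant, this is how I am checking which bijector the kernel parameter actually carries after the assign (a sketch, assuming gpflow.Parameter exposes its bijector via a .transform attribute):

# Check which bijector the lengthscale parameter is actually using; .assign()
# copies the value, so the default positive transform may still be in place
print(m.kernel.lengthscale.transform)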
Thanks!