The GPflow documentation provides an example of multi-class classification using the robust-max function. I am trying to train a multi-class classifier with the softmax likelihood, which is also implemented in GPflow, but I cannot find any documentation or examples explaining how to use it correctly.
Please find below the example I have tried. During training, the loss decreases smoothly.
The robust-max example mentioned above uses categorical labels, i.e. the values 0, 1, 2, but simply swapping the robust-max likelihood for the softmax likelihood raises an IndexError in the quadrature method. I therefore assumed that this model with the softmax likelihood needs one-hot encoded labels. At test time, however, I noticed that the model never predicts the third class in this three-class toy example. On closer inspection, the softmax likelihood has the following method:
def _log_prob(self, F, Y):
    return -tf.nn.sparse_softmax_cross_entropy_with_logits(logits=F, labels=Y[:, 0])
which looks like it expects an array of categorical labels of shape [num_samples, 1].
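As a sanity check (my own reasoning, not anything from the GPflow docs): if one-hot rows are fed into that method, Y[:, 0] picks out only the first column, which would explain why one class is never predicted:

```python
import numpy as np

num_classes = 3
Y = np.array([0, 2, 1, 2])                  # true categorical labels

# One-hot encoding, as in my training code below
Y_hot = np.zeros((len(Y), num_classes), dtype=int)
Y_hot[np.arange(len(Y)), Y] = 1

# What the likelihood would actually see as labels: only the first column,
# so the "labels" are always 0 or 1 and class 2 never appears as a target
print(Y_hot[:, 0])  # [1 0 0 0]
```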
What is the correct way to use the softmax likelihood for multi-class GP classification?
import numpy as np
import tensorflow as tf
import gpflow
from gpflow.likelihoods.multiclass import Softmax
from tqdm.auto import tqdm
np.random.seed(0)
tf.random.set_seed(123)
# Number of functions and number of data points
num_classes = 3
N = 100
# Create training data
# Jitter
jitter_eye = np.eye(N) * 1e-6
# Input
X = np.random.rand(N, 1)
# SquaredExponential kernel matrix
kernel_se = gpflow.kernels.SquaredExponential(lengthscales=0.1)
K = kernel_se(X) + jitter_eye
# Latents prior sample
f = np.random.multivariate_normal(mean=np.zeros(N), cov=K, size=(num_classes)).T
# Hard max observation
Y = np.argmax(f, 1).reshape(-1,).astype(int)
# One-hot encoding
Y_hot = np.zeros((N, num_classes), dtype=int)  # np.int is deprecated/removed in recent NumPy
Y_hot[np.arange(N), Y] = 1
data = (X, Y_hot)
# sum kernel: Matern32 + White
kernel = gpflow.kernels.Matern32() + gpflow.kernels.White(variance=0.01)
likelihood = Softmax(num_classes)
m = gpflow.models.VGP(
    data=data,
    kernel=kernel,
    likelihood=likelihood,
    num_latent_gps=num_classes,
)
def run_adam(model, iterations):
    """
    Utility function running the Adam optimizer
    """
    losses = []
    training_loss = model.training_loss
    optimizer = tf.optimizers.Adam()

    @tf.function
    def optimization_step():
        optimizer.minimize(training_loss, model.trainable_variables)

    for step in tqdm(range(iterations), total=iterations):
        optimization_step()
        if step % 10 == 0:
            elbo = -training_loss().numpy()
            losses.append(elbo)
    return losses
run_adam(model=m, iterations=10000)
y_pred = m.predict_y(X)[0]
print("Training accuracy: {:3.2f}".format(np.mean(Y == np.argmax(y_pred, axis=1))))
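For reference, here is a standalone snippet mirroring what I believe _log_prob expects (this is my assumption from the code snippet above, not documented behavior): integer class indices in an array of shape [num_samples, 1]:

```python
import numpy as np
import tensorflow as tf

N, num_classes = 4, 3
F = tf.random.normal((N, num_classes))                   # latent values (logits), shape [N, num_classes]
Y_cat = np.array([[0], [2], [1], [2]], dtype=np.int32)   # categorical labels, shape [N, 1]

# Mirrors Softmax._log_prob: labels=Y[:, 0] requires integer class indices
nll = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=F, labels=Y_cat[:, 0])
print(nll.shape)  # one loss value per data point: (4,)
```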