I am following the example at https://scikit-learn.org/stable/auto_examples/neural_networks/plot_mnist_filters.html#sphx-glr-auto-examples-neural-networks-plot-mnist-filters-py, and I am trying to work out whether my understanding of the number of nodes in the input and output layers is correct. The relevant code is:
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.neural_network import MLPClassifier
print(__doc__)
# Load data from https://www.openml.org/d/554
X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
# rescale the data, use the traditional train/test split
X = X / 255.
X_train, X_test = X[:60000], X[60000:]
y_train, y_test = y[:60000], y[60000:]
mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=10, alpha=1e-4,
solver='sgd', verbose=10, random_state=1,
learning_rate_init=.1)
mlp.fit(X_train, y_train)
score = mlp.score(X_test, y_test)
According to https://dudeperf3ct.github.io/mlp/mnist/2018/10/08/Force-of-Multi-Layer-Perceptron/, this example has 784 nodes in the input layer (which I assume comes from the shape of the data) and 10 nodes in the output layer, one per digit.
Is that also the case for the MLPClassifier in the code above?
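One way to check this directly (a sketch, not from the original example) is to inspect the fitted model's `coefs_` attribute, where `coefs_[i]` is the weight matrix between layer i and layer i+1. To keep it fast, the snippet below uses hypothetical random data with the same shape as MNIST (784 features, 10 classes) instead of downloading the real dataset:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for MNIST: 784 features, 10 classes (digits 0-9).
rng = np.random.RandomState(0)
X_demo = rng.rand(20, 784)
y_demo = np.arange(10).repeat(2).astype(str)  # two samples per class

mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=5, random_state=1)
mlp.fit(X_demo, y_demo)  # a ConvergenceWarning is expected with so few iterations

# coefs_[i] holds the weights between layer i and layer i+1.
print(mlp.coefs_[0].shape)  # (784, 50): 784 input nodes -> 50 hidden nodes
print(mlp.coefs_[1].shape)  # (50, 10): 50 hidden nodes -> 10 output nodes
print(mlp.n_outputs_)       # 10
```

The input layer size is inferred from the number of columns in `X`, and the output layer size from the number of distinct classes in `y`, so the same shapes would appear with the real MNIST data.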
Thank you, any clarification would be greatly appreciated!