I'm building my first neural network in Java, and I'm following along with this C++ example online:
vector<double> CNeuralNet::Update(vector<double> &inputs)
{
    //stores the resultant outputs from each layer
    vector<double> outputs;

    int cWeight = 0;

    //first check that we have the correct amount of inputs
    if (inputs.size() != m_NumInputs)
    {
        //just return an empty vector if incorrect.
        return outputs;
    }

    //For each layer....
    for (int i=0; i<m_NumHiddenLayers + 1; ++i)
    {
        if ( i > 0 )
        {
            inputs = outputs;
        }

        outputs.clear();

        cWeight = 0;

        //for each neuron sum the (inputs * corresponding weights). Throw
        //the total at our sigmoid function to get the output.
        for (int j=0; j<m_vecLayers[i].m_NumNeurons; ++j)
        {
            double netinput = 0;

            int NumInputs = m_vecLayers[i].m_vecNeurons[j].m_NumInputs;

            //for each weight
            for (int k=0; k<NumInputs - 1; ++k)
            {
                //sum the weights x inputs
                netinput += m_vecLayers[i].m_vecNeurons[j].m_vecWeight[k] *
                            inputs[cWeight++];
            }

            //add in the bias
            netinput += m_vecLayers[i].m_vecNeurons[j].m_vecWeight[NumInputs-1] *
                        CParams::dBias;

            //we can store the outputs from each layer as we generate them.
            //The combined activation is first filtered through the sigmoid
            //function
            outputs.push_back(Sigmoid(netinput, CParams::dActivationResponse));

            cWeight = 0;
        }
    }

    return outputs;
}
I have two questions about this code. First, the seemingly... odd assignment of outputs to inputs:
//For each layer....
for (int i=0; i<m_NumHiddenLayers + 1; ++i)
{
    if ( i > 0 )
    {
        inputs = outputs;
    }

    outputs.clear();
This part really confuses me. He just created outputs... so why is he assigning outputs over to inputs? Also, why ++i? From what I can tell, his earlier code still uses index [0], which is what I'm doing. Why the sudden change? Is there a reason he leaves the last one out? I realize this may be a hard question to answer without the rest of the code example...
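For context, here is roughly where my own Java version stands. I'm trying to follow his structure as literally as I can; all of the names here are mine (not from the book), Params is just the little constants class I mention in my second question below, and I'm only guessing that Sigmoid is the standard logistic function with the activation response dividing the net input, since I don't have that part of his code in front of me:

import java.util.ArrayList;
import java.util.List;

// Rough Java translation of the Update method above -- my own names throughout.
public class SimpleNeuralNet {

    // weights[layer][neuron] holds each neuron's input weights followed by one
    // extra bias weight at the end, mirroring m_vecWeight in the C++ example
    private final double[][][] weights;
    private final int numInputs;

    public SimpleNeuralNet(double[][][] weights, int numInputs) {
        this.weights = weights;
        this.numInputs = numInputs;
    }

    public List<Double> update(List<Double> inputs) {
        List<Double> outputs = new ArrayList<>();

        // wrong number of inputs: return an empty list, like the C++ version
        if (inputs.size() != numInputs) {
            return outputs;
        }

        List<Double> layerInputs = new ArrayList<>(inputs);

        for (int i = 0; i < weights.length; i++) {   // layer index starting at 0
            if (i > 0) {
                layerInputs = outputs;   // copying outputs over to inputs, like his code does
            }
            outputs = new ArrayList<>();

            for (double[] neuronWeights : weights[i]) {
                double netInput = 0.0;

                // every weight except the last multiplies one of the inputs
                for (int k = 0; k < neuronWeights.length - 1; k++) {
                    netInput += neuronWeights[k] * layerInputs.get(k);
                }

                // the last weight is the bias weight
                netInput += neuronWeights[neuronWeights.length - 1] * Params.BIAS;

                outputs.add(sigmoid(netInput, Params.ACTIVATION_RESPONSE));
            }
        }
        return outputs;
    }

    // my guess at what the book's Sigmoid(netinput, response) does
    private static double sigmoid(double netInput, double response) {
        return 1.0 / (1.0 + Math.exp(-netInput / response));
    }
}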
My second question is about this:
//add in the bias
netinput += m_vecLayers[i].m_vecNeurons[j].m_vecWeight[NumInputs-1] *
            CParams::dBias;

//we can store the outputs from each layer as we generate them.
//The combined activation is first filtered through the sigmoid
//function
outputs.push_back(Sigmoid(netinput, CParams::dActivationResponse));
CParams::dBias and CParams::dActivationResponse don't appear anywhere before this point. For now I've created two static final global variables for them. Am I on the right track?
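In case it matters, this is roughly what that looks like on my side; the actual values are placeholders I picked, since I don't know what the original CParams uses:

public final class Params {

    // my stand-ins for CParams::dBias and CParams::dActivationResponse;
    // the values are guesses, not taken from the book
    public static final double BIAS = -1.0;
    public static final double ACTIVATION_RESPONSE = 1.0;

    private Params() {
        // no instances, just constants
    }
}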
Any help would be appreciated. This is a personal project, and I haven't been able to stop thinking about the subject since I first learned about it two weeks ago.