7

Many of the papers I have read so far mention that "pre-training a network can improve computational efficiency in terms of back-propagating errors", and that it can be achieved using RBMs or autoencoders.

  1. If I understand correctly, an autoencoder works by learning the identity function, and if it has fewer hidden units than the size of the input data, it also performs compression. But what does this have to do with improving the computational efficiency of propagating the error signal backwards? Is it because the weights of the pre-trained hidden units do not diverge much from their initial values?

  2. Data scientists reading this will presumably already know that autoencoders take their inputs as target values, since they learn the identity function, which is regarded as unsupervised learning. But can such a method be applied to convolutional neural networks, in which the first hidden layer is a feature map? Each feature map is created by convolving a learned kernel with a receptive field in the image. How could this learned kernel be obtained by pre-training (in an unsupervised fashion)?

4

2 Answers

11

One thing to note is that autoencoders try to learn the non-trivial identity function, not the identity function itself. Otherwise they wouldn't be useful at all. The pre-training helps move the weight vectors towards a good starting point on the error surface. Then the backpropagation algorithm, which is basically doing gradient descent, is used to improve upon those weights. Note that gradient descent gets stuck in the closest local minimum.
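To make the "learning a non-trivial identity function" point concrete, here is a minimal numpy sketch of an autoencoder with a bottleneck smaller than its input, trained by gradient descent to reproduce its own input. All sizes, the learning rate, and the synthetic data are illustrative, not taken from the answer.

```python
import numpy as np

# Minimal autoencoder sketch: the target is the input itself, and the
# bottleneck (4 < 8 units) forces a compressed, non-trivial representation.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # 100 samples, 8 features

n_hidden = 4                              # bottleneck: fewer hidden units than inputs
W1 = rng.normal(scale=0.1, size=(8, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, 8))
lr = 0.1

def reconstruct(X):
    H = np.tanh(X @ W1)                   # encoder
    return H, H @ W2                      # linear decoder

_, X_hat = reconstruct(X)
mse0 = np.mean((X_hat - X) ** 2)          # reconstruction error before training

for _ in range(1000):
    H, X_hat = reconstruct(X)
    err = X_hat - X                       # target is the input itself
    gW2 = H.T @ err / len(X)              # backprop through the decoder
    gH = err @ W2.T * (1 - H ** 2)        # tanh derivative
    gW1 = X.T @ gH / len(X)               # backprop through the encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

_, X_hat = reconstruct(X)
mse = np.mean((X_hat - X) ** 2)
print(mse0, mse)                          # reconstruction error drops
```

After training, the encoder weights `W1` are exactly the kind of "good starting point" a pre-training phase would hand to a subsequent supervised model.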

[Image: an error surface with several local minima and a labeled global minimum]

[Ignore the term Global Minima in the image posted and think of it as another, better, local minimum]

Intuitively speaking, suppose you are looking for an optimal path to get from origin A to destination B. Having a map with no routes shown on it (the errors you obtain at the last layer of the neural network model) kind of tells you where to go, but you may put yourself on a route with a lot of obstacles, uphills and downhills. Then suppose someone tells you about a direction he has gone through before (the pre-training) and hands you a new map (the pre-training phase's starting point).

This could be an intuitive reason why starting with random weights and immediately optimizing the model with backpropagation may not necessarily achieve the performance you obtain with a pre-trained model. However, note that many models achieving state-of-the-art results do not necessarily use pre-training, and they may use backpropagation in combination with other optimization methods (e.g. Adagrad, RMSProp, Momentum, ...) to hopefully avoid getting stuck in a bad local minimum.

[Second image]

Here's the source for the second image.
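As a small illustration of one of the optimization methods mentioned above, here is a sketch of the momentum update rule compared with plain gradient descent on a toy ill-conditioned quadratic. The quadratic, step size, and momentum coefficient are all made up for the sketch.

```python
import numpy as np

# Toy comparison: plain gradient descent vs. gradient descent with momentum
# on f(w) = 0.5 * w.T @ A @ w, whose curvature differs strongly per direction.
A = np.diag([1.0, 10.0])                 # ill-conditioned curvature

def grad(w):
    return A @ w

w_gd = np.array([1.0, 1.0])
w_mom = np.array([1.0, 1.0])
v = np.zeros(2)
lr, beta = 0.01, 0.9

for _ in range(100):
    w_gd = w_gd - lr * grad(w_gd)        # plain gradient descent
    v = beta * v - lr * grad(w_mom)      # momentum accumulates past gradients
    w_mom = w_mom + v

print(np.linalg.norm(w_gd), np.linalg.norm(w_mom))
```

With the same step size, the momentum iterate ends up much closer to the minimum at the origin, because the accumulated velocity speeds up progress along the flat direction.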

answered 2015-12-29T17:44:15.547
0

I don't know a lot about autoencoder theory, but I've done a bit of work with RBMs. What RBMs do is predict the probability of seeing a specific type of data, in order to get the weights initialized to the right ballpark; an RBM is considered an (unsupervised) probabilistic model, so you don't correct using the known labels. Basically, the idea here is that a learning rate that is too big will never lead to convergence, while one that is too small will take forever to train. Thus, by "pretraining" in this way you find the ballpark of the weights, and can then set the learning rate to be small in order to get them down to the optimal values.
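The unsupervised RBM update described above is usually approximated with contrastive divergence. Here is a mean-field numpy sketch of a CD-1 weight update (biases omitted, sizes and learning rate illustrative), trained on a single binary pattern so the effect is easy to verify.

```python
import numpy as np

# Mean-field sketch of contrastive divergence (CD-1), the unsupervised
# RBM "pretraining" update. Biases are omitted; sizes are illustrative.
n_visible, n_hidden = 6, 3
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    ph0 = sigmoid(v0 @ W)                 # up pass: hidden probabilities given data
    pv1 = sigmoid(ph0 @ W.T)              # down pass: reconstruct the visibles
    ph1 = sigmoid(pv1 @ W)                # up pass again on the reconstruction
    # positive phase minus negative phase: the CD-1 approximation of the gradient
    return np.outer(v0, ph0) - np.outer(pv1, ph1)

v = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])   # one binary training pattern

def recon_error(v0):
    pv1 = sigmoid(sigmoid(v0 @ W) @ W.T)
    return np.mean((pv1 - v0) ** 2)

err_before = recon_error(v)
for _ in range(500):
    W += lr * cd1_update(v)
err_after = recon_error(v)
print(err_before, err_after)              # reconstruction error shrinks
```

Note this never touches a label: the weights are nudged so that data the model reconstructs looks like data it actually sees, which is exactly the "right ballpark" initialization described above.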

As for the second question, no, you don't generally pre-learn kernels, at least not in an unsupervised fashion. I suspect that what is meant by pretraining here is a bit different than in your first question; that is to say, what happens is that they take a pretrained model (say, from a model zoo) and fine-tune it with a new set of data.
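The fine-tuning idea can be sketched without any deep learning framework: keep a "pretrained" feature extractor frozen and train only a new output layer on the new data. Everything below (the random stand-in extractor, the synthetic data and labels, all sizes) is invented for illustration.

```python
import numpy as np

# Toy illustration of fine-tuning: a frozen feature extractor W_pre stands in
# for the pretrained convolutional layers; only the new head is trained.
rng = np.random.default_rng(1)
W_pre = rng.normal(size=(8, 4))            # frozen "pretrained" weights

X = rng.normal(size=(200, 8))              # new dataset
y = (X[:, 0] > 0).astype(float)            # synthetic binary labels

def features(X):
    return np.tanh(X @ W_pre)              # frozen feature extractor

w_head = np.zeros(4)                       # the only trainable parameters
lr = 0.5
F = features(X)
for _ in range(300):
    p = 1 / (1 + np.exp(-(F @ w_head)))    # logistic-regression head
    w_head -= lr * F.T @ (p - y) / len(X)  # cross-entropy gradient step

acc = np.mean(((1 / (1 + np.exp(-(F @ w_head)))) > 0.5) == (y == 1))
print(acc)                                 # well above the 0.5 chance level
```

In a real framework you would instead load published weights, freeze the convolutional layers, replace the final classification layer, and continue backpropagation on the new data; the split between frozen and trainable parameters is the same.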

Which model you use generally depends on the type of data you have and the task at hand. I've found convnets to train faster and more efficiently, but not all data has meaning when convolved, in which case DBNs may be the way to go. And if you only have a small amount of data, I'd use something other than neural networks entirely.

Anyway, I hope this helps clear up some of your questions.

answered 2015-12-29T17:34:27.320