
I am trying to port this Python implementation of a continuous RBM to MATLAB: http://imonad.com/rbm/restricted-boltzmann-machine/

I generated two-dimensional training data in the shape of a (noisy) circle and trained an RBM with 2 visible and 8 hidden units. To test the implementation, I fed uniformly distributed random data into the RBM and plotted the reconstructed data (the same procedure used in the link above).
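For reference, training data of this kind can be generated roughly as follows; the centre, radius and noise level below are illustrative assumptions, not values taken from the linked post:

nDat  = 500;                                  % hypothetical number of training points
theta = 2 * pi * rand(nDat, 1);               % random angles around the circle
dat   = 0.5 + 0.35 * [cos(theta) sin(theta)] ...
        + 0.02 * randn(nDat, 2);              % noisy circle inside (0,1)x(0,1)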

Now to the confusing part: with training data in the range (0,1)x(0,1) I get very satisfying results, but with training data in the range (-0.5,-0.5)x(-0.5,-0.5) or (-1,0)x(-1,0) the RBM only reconstructs data in the upper-right part of the circle. I don't understand what causes this. Is it just a bug in my implementation that I am not seeing?

Some plots: the blue dots are the training data, the red dots the reconstructed data.

Here is my implementation of the RBM. Training script:

maxepoch = 300;
ksteps = 10;
sigma = 0.2;        % cd standard deviation
learnW = 0.5;       % learning rate W
learnA  = 0.5;      % learning rate A
nVis = 2;           % number of visible units
nHid = 8;           % number of hidden units
nDat = size(dat, 1);% number of training data points
cost = 0.00001;     % cost
moment = 0.9;      % momentum
W = randn(nVis+1, nHid+1) / 10; % weights
dW = randn(nVis+1, nHid+1) / 1000; % change of weights
sVis = zeros(1, nVis+1);    % state of visible neurons
sVis(1, end) = 1.0;         % bias
sVis0 = zeros(1, nVis+1);   % initial state of visible neurons
sVis0(1, end) = 1.0;        % bias
sHid = zeros(1, nHid+1);    % state of hidden neurons
sHid(1, end) = 1.0;         % bias
aVis  = 0.1*ones(1, nVis+1);% A visible
aHid  = ones(1, nHid+1);    % A hidden
err = zeros(1, maxepoch);
e = zeros(1, maxepoch);
for epoch = 1:maxepoch
    wPos = zeros(nVis+1, nHid+1);
    wNeg = zeros(nVis+1, nHid+1);
    aPos = zeros(1, nHid+1);
    aNeg = zeros(1, nHid+1);
    for point = 1:nDat
        sVis(1:nVis) = dat(point, :);
        sVis0(1:nVis) = sVis(1:nVis); % initial sVis
        % positive phase
        activHid;
        wPos = wPos + sVis' * sHid;
        aPos = aPos + sHid .* sHid;
        % negative phase
        activVis;
        activHid;
        for k = 1:ksteps
            activVis;
            activHid;
        end
        tmp = sVis' * sHid;
        wNeg = wNeg + tmp;
        aNeg = aNeg + sHid .* sHid;
        delta = sVis0(1:nVis) - sVis(1:nVis);
        err(epoch) = err(epoch) + sum(delta .* delta);
        e(epoch) = e(epoch) - sum(sum(W' * tmp));
    end
    dW = dW*moment + learnW * ((wPos - wNeg) / numel(dat)) - cost * W;
    W = W + dW;
    aHid = aHid + learnA * (aPos - aNeg) ./ (numel(dat) * (aHid .* aHid)); % element-wise divide (./), not matrix right-division
    % error
    err(epoch) = err(epoch) / (nVis * numel(dat));
    e(epoch) = e(epoch) / numel(dat);
    disp(['epoch: ' num2str(epoch) ' err: ' num2str(err(epoch)) ...
    ' ksteps: ' num2str(ksteps)]);
end
save(['rbm_' filename '.mat'], 'W', 'err', 'aVis', 'aHid');

activHid.m:

sHid = (sVis * W) + randn(1, nHid+1);
sHid = sigFun(aHid .* sHid, datRange);
sHid(end) = 1.; % bias

activVis.m:

sVis = (W * sHid')' + randn(1, nVis+1);
sVis = sigFun(aVis .* sVis, datRange);
sVis(end) = 1.; % bias

sigFun.m:

function [sig] = sigFun(X, datRange)
    a = ones(size(X)) * datRange(1);
    b = ones(size(X)) * (datRange(2) - datRange(1));
    c = ones(size(X)) + exp(-X);
    sig = a + (b ./ c);
end
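For context, sigFun is a sigmoid rescaled to the interval [datRange(1), datRange(2)]; datRange itself is never set in the posted scripts. A quick sanity check, assuming datRange = [0 1] (an assumption on my part):

datRange = [0 1];         % assumption: not defined anywhere in the posted code
sigFun(-10, datRange)     % ~0   (lower asymptote, datRange(1))
sigFun(0, datRange)       % 0.5  (midpoint of the range)
sigFun(10, datRange)      % ~1   (upper asymptote, datRange(2))

With this setting every unit state is squashed into [0, 1], which is relevant to the behaviour discussed in the answers below.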

Reconstruction:

nSamples = 2000;
ksteps = 10;
nVis = 2;
nHid = 8;
sVis = zeros(1, nVis+1);    % state of visible neurons
sVis(1, end) = 1.0;         % bias
sHid = zeros(1, nHid+1);    % state of hidden neurons
sHid(1, end) = 1.0;         % bias
input = rand(nSamples, 2);
output = zeros(nSamples, 2);
for sample = 1:nSamples
    sVis(1:nVis) = input(sample, :);
    for k = 1:ksteps
        activHid;
        activVis;
    end
    output(sample, :) = sVis(1:nVis);
end
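
A minimal sketch of the plots described above (blue = training data, red = reconstructed data), assuming dat and output from the scripts above are in the workspace:

figure; hold on;
plot(dat(:, 1), dat(:, 2), 'b.');        % training data (blue)
plot(output(:, 1), output(:, 2), 'r.');  % reconstructed data (red)
axis equal; hold off;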

2 Answers


RBMs were originally designed to work only with binary data. But they can also handle data between 0 and 1; that is part of the algorithm. Further reading.
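In practice this means rescaling each input dimension into [0, 1] before training. A minimal sketch of such a rescaling (the column-wise min/max scaling below is my own illustration, not part of the original answer):

lo = min(dat);                        % per-dimension minimum of the training data
hi = max(dat);                        % per-dimension maximum
datScaled = (dat - lo) ./ (hi - lo);  % maps dat into [0,1] x [0,1] (implicit expansion, R2016b+; use bsxfun on older MATLAB)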

Answered on 2014-02-07T06:32:13.873

Since the inputs for both x and y are in the range [0 1], that is why they stay in that region. Change the input to input = (rand(nSamples, 2)*2) - 1; so that the inputs are sampled from the range [-1 1]; the red points will then be spread more widely around the circle.
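Applied to the reconstruction script in the question, only the input line changes (everything else assumed unchanged):

input = (rand(nSamples, 2) * 2) - 1;  % sample inputs from [-1 1] x [-1 1] instead of [0 1] x [0 1]

Note that if sigFun is still called with datRange = [0 1], the reconstructed states themselves remain inside [0, 1]; only the starting points of the Gibbs chain change.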

Answered on 2015-10-02T08:58:10.830