
I am trying to tune the parameters of the pegasos algorithm, but without any luck.

Here is a simple example using 6 one-dimensional points.

#include <dlib/svm.h>
#include <iostream>
#include <vector>

using namespace std;

void dlib_svm_test()
{
    // Sweep lambda over many orders of magnitude.
    for (double lambda = 10e-10; lambda <= 10e10; lambda *= 10)
    {
        typedef dlib::matrix<double, 0, 1> sample_type;
        typedef dlib::linear_kernel<sample_type> kernel_type;

        dlib::svm_pegasos<kernel_type> pegasos;

        cout << lambda << endl;

        pegasos.set_lambda(lambda);
        pegasos.set_kernel(kernel_type());

        // Six one-dimensional samples.
        std::vector<sample_type> samples(6);
        sample_type sample;
        sample.set_size(1);

        sample(0) = 188.0;
        samples[0] = sample;
        sample(0) = 168.0;
        samples[1] = sample;
        sample(0) = 191.0;
        samples[2] = sample;
        sample(0) = 150.0;
        samples[3] = sample;
        sample(0) = 154.0;
        samples[4] = sample;
        sample(0) = 124.0;
        samples[5] = sample;

        // One online training step per sample.
        pegasos.train(samples[0], +1);
        pegasos.train(samples[1], +1);
        pegasos.train(samples[2], +1);
        pegasos.train(samples[3], -1);
        pegasos.train(samples[4], -1);
        pegasos.train(samples[5], -1);

        // Print the decision value for each sample.
        cout << pegasos(samples[0]) << endl;
        cout << pegasos(samples[1]) << endl;
        cout << pegasos(samples[2]) << endl;
        cout << pegasos(samples[3]) << endl;
        cout << pegasos(samples[4]) << endl;
        cout << pegasos(samples[5]) << endl;

        pegasos.clear();
    }
}

The output I get:

0.0000000010
-3963387.1199921928
-3541750.1923335334
-4026632.6591409920
-3162276.9574407390
-3246604.3429724714
-2614148.9514844813
0.0000000100
-1253333.0548153266
-1119999.7511116527
-1273333.0503708781
-999999.7777783460
-1026666.4385190808
-826666.4829635697
0.0000001000
-396338.7119995961
-354175.0192337657
-402663.2659144707
-316227.6957445183
-324660.4342976844
-261414.8951489388
0.0000010000
-125333.3054819095
-111999.9751115777
-127333.3050374593
-99999.9777782790
-102666.6438523454
-82666.6482968476
0.0000100000
-39633.8712003365
-35417.5019237890
-40266.3265918186
-31622.7695748963
-32466.0434302058
-26141.4895153846
0.0001000000
-12533.3305485679
-11199.9975115703
-12733.3305041176
-9999.9977782724
-10266.6643856720
-8266.6648301755
0.0010000000
-3963.3871204108
-3541.7501927916
-4026.6326595536
-3162.2769579343
-3246.6043434582
-2614.1489520294
0.0100000000
-1253.3330552344
-1119.9997515702
-1273.3330507840
-999.9997782725
-1026.6664390053
-826.6664835091
0.1000000000
-396.3387124203
-354.1750196940
-402.6632663292
-316.2276962404
-324.6604347856
-261.4148956963
1.0000000000
-125.3333059077
-111.9999755772
-127.3333054573
-99.9999782797
-102.6666443458
-82.6666488500
10.0000000000
-39.6338716427
-35.4175024067
-40.2663270281
-31.6227700943
-32.4660439415
-26.1414900875
100.0000000000
-12.5333310483
-11.1999980544
-12.7333309973
-9.9999983600
-10.2666649587
-8.2666654680
1000.0000000000
-3.7091542406
-3.3145634810
-3.7683428546
-2.9594317974
-3.0383499493
-2.4464638100
10000.0000000000
-0.4292670207
-0.3836003494
-0.4361170215
-0.3425003451
-0.3516336794
-0.2831336723
100000.0000000000
0.0372866667
0.0333200000
0.0378816667
0.0297500000
0.0305433333
0.0245933333
1000000.0000000000
0.0037286667
0.0033320000
0.0037881667
0.0029750000
0.0030543333
0.0024593333
10000000.0000000000
0.0003728667
0.0003332000
0.0003788167
0.0002975000
0.0003054333
0.0002459333
100000000.0000000000
0.0000372867
0.0000333200
0.0000378817
0.0000297500
0.0000305433
0.0000245933
1000000000.0000000000
0.0000037287
0.0000033320
0.0000037882
0.0000029750
0.0000030543
0.0000024593
10000000000.0000000000
0.0000003729
0.0000003332
0.0000003788
0.0000002975
0.0000003054
0.0000002459
100000000000.0000000000
0.0000000373
0.0000000333
0.0000000379
0.0000000297
0.0000000305
0.0000000246

So the problem is that, for any given lambda, all samples are predicted as either all negative or all positive.

Update:

The problem has been solved:

https://github.com/davisking/dlib/issues/49


1 Answer


This is an online learning algorithm based on stochastic gradient descent. Each call to train() takes one gradient step, so you have to call train() many more than 6 times.
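For illustration, a minimal sketch of that suggestion, reusing the samples and pegasos objects from the question's code; the epoch count below is an arbitrary assumption, not a value the answer specifies:

// Make repeated online passes over the data so train() is called
// far more than 6 times. num_epochs is an arbitrary illustrative value.
const int num_epochs = 1000;
const double labels[6] = { +1, +1, +1, -1, -1, -1 };

for (int epoch = 0; epoch < num_epochs; ++epoch)
{
    for (size_t i = 0; i < samples.size(); ++i)
        pegasos.train(samples[i], labels[i]);
}

// After enough passes the decision values should start to separate
// the positive and negative samples.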

You would also probably be better off using a batch algorithm rather than an online one. Use this guide to pick the one that fits your task: http://dlib.net/ml_guide.svg
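As one possible batch route, here is a hedged sketch using dlib's svm_c_linear_trainer (this particular trainer and the C value are my own choices for illustration, not something the answer names). A batch trainer iterates over the whole data set internally, so there is no need to call train() repeatedly by hand:

#include <dlib/svm.h>
#include <iostream>
#include <vector>

int main()
{
    typedef dlib::matrix<double, 0, 1> sample_type;
    typedef dlib::linear_kernel<sample_type> kernel_type;

    // The same six one-dimensional points and labels as in the question.
    std::vector<sample_type> samples(6);
    std::vector<double> labels = { +1, +1, +1, -1, -1, -1 };
    const double values[6] = { 188.0, 168.0, 191.0, 150.0, 154.0, 124.0 };
    for (size_t i = 0; i < samples.size(); ++i)
    {
        samples[i].set_size(1);
        samples[i](0) = values[i];
    }

    // Batch trainer for a linear SVM.
    dlib::svm_c_linear_trainer<kernel_type> trainer;
    trainer.set_c(1);  // arbitrary illustrative regularization value

    dlib::decision_function<kernel_type> df = trainer.train(samples, labels);

    // The decision values should now be positive for the first three
    // samples and negative for the last three.
    for (size_t i = 0; i < samples.size(); ++i)
        std::cout << df(samples[i]) << std::endl;
}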

Answered on 2015-10-17T13:09:23.413