I am using the PCAFast method from the mlpy API in Python (http://mlpy.sourceforge.net/docs/3.2/dim_red.html).
The method runs quickly when it learns a feature matrix generated as follows:
import numpy as np
x = np.random.rand(100, 100)
Sample output of this matrix is:
[[ 0.5488135 0.71518937 0.60276338 ..., 0.02010755 0.82894003
0.00469548]
[ 0.67781654 0.27000797 0.73519402 ..., 0.25435648 0.05802916
0.43441663]
[ 0.31179588 0.69634349 0.37775184 ..., 0.86219152 0.97291949
0.96083466]
...,
[ 0.89111234 0.26867428 0.84028499 ..., 0.5736796 0.73729114
0.22519844]
[ 0.26969792 0.73882539 0.80714479 ..., 0.94836806 0.88130699
0.1419334 ]
[ 0.88498232 0.19701397 0.56861333 ..., 0.75842952 0.02378743
0.81357508]]
However, when the feature matrix x consists of constant data such as the following:
x = 7.55302582e-05*np.ones((n, d[i]))
Sample output:
[[ 7.55302582e-05 7.55302582e-05 7.55302582e-05 ..., 7.55302582e-05
7.55302582e-05 7.55302582e-05]
[ 7.55302582e-05 7.55302582e-05 7.55302582e-05 ..., 7.55302582e-05
7.55302582e-05 7.55302582e-05]
[ 7.55302582e-05 7.55302582e-05 7.55302582e-05 ..., 7.55302582e-05
7.55302582e-05 7.55302582e-05]
...,
[ 7.55302582e-05 7.55302582e-05 7.55302582e-05 ..., 7.55302582e-05
7.55302582e-05 7.55302582e-05]
[ 7.55302582e-05 7.55302582e-05 7.55302582e-05 ..., 7.55302582e-05
7.55302582e-05 7.55302582e-05]
[ 7.55302582e-05 7.55302582e-05 7.55302582e-05 ..., 7.55302582e-05
7.55302582e-05 7.55302582e-05]]
the method becomes very slow. Why does this happen? Does it have something to do with the type of the data stored in the feature matrix x?
Any ideas on how to solve this?
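One thing worth checking (an assumption on my part, not verified against mlpy's internals): PCA operates on the mean-centered data, and a constant matrix centers to (numerically) zero, so it has no variance in any direction. An iterative routine like PCAFast may struggle to converge on such degenerate input. A quick numpy-only check of the two inputs:

```python
import numpy as np

# Random matrix: plenty of variance, PCA-friendly (the fast case).
x_rand = np.random.rand(100, 100)

# Constant matrix like the slow case above (100x100 used for illustration).
x_const = 7.55302582e-05 * np.ones((100, 100))

# Mean-center each matrix column-wise, as PCA does internally.
centered_rand = x_rand - x_rand.mean(axis=0)
centered_const = x_const - x_const.mean(axis=0)

# The constant matrix centers to (numerically) zero: essentially no
# variance at all, so there is no principal direction to find.
print(np.abs(centered_const).max())  # on the order of machine epsilon or 0
print(np.abs(centered_rand).max())   # order of magnitude ~1
```

If that is indeed the cause, one workaround to test would be skipping PCA for inputs whose column variance is (near) zero, since they carry no information for dimensionality reduction anyway.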