
For supervised learning my matrix is very large, so only certain models will accept it. I have read that PCA can help significantly with reducing the dimensionality.

Here is my code:

import subprocess

import numpy as np

def run(command):
    # Run a shell command and return its stdout (as bytes).
    output = subprocess.check_output(command, shell=True)
    return output

f = open('/Users/ya/Documents/10percent/Vik.txt','r')
vocab_temp = f.read().split()
f.close()
col = len(vocab_temp)
print("Training column size:")
print(col)

#dataset = list()

row = run('cat '+'/Users/ya/Documents/10percent/X_true.txt'+" | wc -l").split()[0]
print("Training row size:")
print(row)
matrix_tmp = np.zeros((int(row),col), dtype=np.int64)
print("Train Matrix size:")
print(matrix_tmp.size)
# label_tmp.ndim must be equal to 1
label_tmp = np.zeros((int(row)), dtype=np.int64)
f = open('/Users/ya/Documents/10percent/X_true.txt','r')
count = 0
word_index = {w: i for i, w in enumerate(vocab_temp)}  # O(1) lookups instead of list.index
for line in f:
    line_tmp = line.split()
    #print(line_tmp)
    for word in line_tmp:
        if word not in word_index:
            continue
        matrix_tmp[count][word_index[word]] = 1
    count = count + 1
f.close()
print("Train matrix is:\n ")
print(matrix_tmp)
print(label_tmp)
print(len(label_tmp))
print("No. of topics in train:")
print(len(set(label_tmp)))
print("Train Label size:")
print(len(label_tmp))

I want to apply PCA to matrix_tmp, since its size is about (202180x9984). How can I modify my code to include it?


2 Answers

import codecs
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
with codecs.open('input_file', 'r', encoding='utf-8') as inf:
    lines = inf.readlines()
vectorizer = CountVectorizer(binary=True)
X_train = vectorizer.fit_transform(lines)
perform_pca = False
if perform_pca:
    n_components = 100
    pca = TruncatedSVD(n_components)
    X_train = pca.fit_transform(X_train)

1- Use the vectorizer available in sklearn to do the vectorization; it produces a sparse matrix instead of a full matrix that is mostly zeros.

2- Perform PCA only when it is actually needed.

3- If necessary, tune the parameters of the vectorizer and of the PCA step to improve performance.
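The points above can be sketched on a few toy documents (standing in for the lines of X_true.txt); the sparse matrix stores only the non-zero entries, which is what makes a 202180x9984 binary matrix manageable:

```python
# Sketch, assuming scikit-learn is installed: build the same binary
# document-term matrix sparsely instead of as a dense zero-filled array.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog ran", "the cat ran"]  # toy stand-in for the input lines

vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(docs)  # scipy.sparse matrix, not a dense ndarray
print(X.shape)  # (3, 5)
print(X.nnz)    # 9 -- only non-zero entries are stored, not rows*cols
```

At the asker's scale this means storing on the order of the word occurrences rather than all ~2 billion cells of the dense matrix.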

Answered on 2016-01-17T02:45:06.847

Scikit-learn provides several PCA implementations. A useful one is TruncatedSVD. Its usage is fairly straightforward:

from sklearn.decomposition import TruncatedSVD

n_components = 100
pca = TruncatedSVD(n_components)
matrix_reduced = pca.fit_transform(matrix_tmp)
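To pick n_components, it can help to inspect how much variance the reduced matrix keeps via the fitted model's explained_variance_ratio_ attribute. A minimal sketch, using a random sparse matrix as a stand-in for matrix_tmp (TruncatedSVD accepts sparse input directly, unlike plain PCA):

```python
# Sketch: reduce a sparse matrix and check the retained variance.
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

X = sparse_random(1000, 500, density=0.01, random_state=0)  # stand-in for matrix_tmp

svd = TruncatedSVD(n_components=100, random_state=0)
X_reduced = svd.fit_transform(X)
print(X_reduced.shape)                      # (1000, 100)
print(svd.explained_variance_ratio_.sum())  # fraction of variance retained, in (0, 1]
```

If the summed ratio is too low for your downstream model, increase n_components and refit.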
Answered on 2016-01-16T06:11:58.150