Hi, I'm classifying tweets into 7 classes. I have about 250,000 training tweets and a separate set of about 250,000 test tweets. My code is below: training.pkl holds the training tweets and testing.pkl the test tweets. As you can see, I also have the corresponding labels.
When I run my code, I find that converting the (raw) test set into the feature space takes 14.9649999142 seconds. I also measured the time needed to classify all tweets in the test set: 0.131999969482 seconds.
However, it seems unlikely to me that this framework can classify roughly 250,000 tweets in 0.131999969482 seconds. My question now is: is this correct?
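For context (this is an independent sketch, not part of the code below): at prediction time a linear classifier such as SGDClassifier only has to compute one sparse-times-dense matrix product plus an argmax over the class scores, so the cost depends on the number of stored elements, not the raw text. The sketch below builds a synthetic CSR matrix with roughly the sizes reported further down (~250,000 rows, ~213,000 features, ~17 non-zeros per row) and times that product; all sizes and the random weight matrix are assumptions for illustration.

```python
import time
import numpy as np
import scipy.sparse as sp

# Assumed sizes mirroring the question: ~250,000 test rows, ~213,162
# features, 7 classes, ~17 non-zeros per row (4.2M elements / 250k rows).
n_samples, n_features, n_classes, nnz_per_row = 250000, 213162, 7, 17

# Build a random CSR matrix directly from (data, indices, indptr); this is
# cheaper than generating one element at a time for a matrix this large.
rng = np.random.RandomState(0)
indptr = np.arange(0, (n_samples + 1) * nnz_per_row, nnz_per_row)
indices = rng.randint(0, n_features, size=n_samples * nnz_per_row)
data = rng.rand(n_samples * nnz_per_row)
X = sp.csr_matrix((data, indices, indptr), shape=(n_samples, n_features))

# A linear model's predict() boils down to one sparse-dense product
# followed by an argmax over the per-class scores.
W = rng.rand(n_features, n_classes)  # stand-in for the learned weights
t0 = time.time()
scores = X.dot(W)
pred = scores.argmax(axis=1)
t1 = time.time()
print('scored %d rows in %.3f s' % (n_samples, t1 - t0))
```

On typical hardware this product finishes in a fraction of a second, which is why a sub-second prediction time for 250,000 sparse rows is not unreasonable, while vectorizing the raw text (tokenization plus TF-IDF lookup) is far more expensive.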
import time
import cPickle
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import linear_model
from sklearn.metrics import classification_report

# Load the pickled tweets and their labels
with open("training.pkl", 'rb') as f:
    training = cPickle.load(f)
with open("testing.pkl", 'rb') as f:
    testing = cPickle.load(f)
with open("ground_truth_testing.pkl", 'rb') as f:
    ground_truth_testing = cPickle.load(f)
with open("ground_truth_training.pkl", 'rb') as f:
    ground_truth_training = cPickle.load(f)
print 'data loaded'
tweetsTestArray = np.array(testing)
tweetsTrainingArray = np.array(training)
y_train = np.array(ground_truth_training)
# Transform dataset to a design matrix with TFIDF and 1,2 gram
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(tweetsTrainingArray)
print "n_samples: %d, n_features: %d" % X_train.shape
print 'COUNT'
_t0 = time.time()
X_test = vectorizer.transform(tweetsTestArray)
print "n_samples: %d, n_features: %d" % X_test.shape
_t1 = time.time()
print _t1 - _t0
print 'STOP'
# TRAINING & TESTING
print 'SUPERVISED'
print '----------------------------------------------------------'
print
print 'SGD'
# Initialize the Stochastic Gradient Descent classifier
sgd = linear_model.SGDClassifier(loss='modified_huber', alpha=0.00003, n_iter=25)
# Train
sgd.fit(X_train, ground_truth_training)
# Predict
print "START COUNT"
_t2 = time.time()
target_sgd = sgd.predict(X_test)
_t3 = time.time()
print _t3 - _t2
print "END COUNT"
# Print report
report_sgd = classification_report(ground_truth_testing, target_sgd)
print report_sgd
print
Printing X_train gives:
<248892x213162 sparse matrix of type '<type 'numpy.float64'>'
with 4346880 stored elements in Compressed Sparse Row format>
and printing X_test gives:
<249993x213162 sparse matrix of type '<type 'numpy.float64'>'
with 4205309 stored elements in Compressed Sparse Row format>