
My goal is to identify clusters in my dataset, which contains about 10 categorical and/or numerical columns and 3 free-text description columns. After some research, I came up with a 3-step process:

  • Preprocess my data (normalize my 10 columns and run TF-IDF on the text data - the resulting shape is something like (89,000, 41206)). After some processing, I use a column transformer like the following:
    import numpy as np
    from sklearn.compose import ColumnTransformer, make_column_selector
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_extraction.text import TfidfVectorizer

    column_trans = ColumnTransformer(
        [('scale', StandardScaler(), make_column_selector(dtype_include=np.number)),
         ('res_vec', TfidfVectorizer(), "Résumé de l'incident"),
         ('desc_vec', TfidfVectorizer(), "Description de l'incident")],
        remainder='drop')

    # Apply the transformer to our dataframe
    all_features = column_trans.fit_transform(df_incidents_sample)
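
For what it's worth, the combined output is not a dense array on my side: the two TF-IDF blocks dominate, so fit_transform returns a scipy sparse matrix. A quick check, assuming the code above ran as-is:

# The TF-IDF columns make the combined output a scipy sparse matrix,
# not a dense NumPy array.
print(type(all_features))    # a scipy.sparse matrix type
print(all_features.shape)    # roughly (89000, 41206)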

(I also tried using PCA:

# First, data normalization
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)

# Keep enough components to explain 70% of the variance
pca = PCA(.70)
pca.fit(X_scaled)
principalComponents = pca.components_
print("Percentage of variance explained: ")
print(pca.explained_variance_ratio_)
print("Main components:")
print(principalComponents)
Percentage of variance explained:  
[0.18618277 0.17050933 0.10841001 0.09733908 0.09186758 0.08251782] 

Main components:
[[ 0.14725228  0.37825793  0.36558713  0.11637642 -0.22776482  0.46478375
   0.26814039  0.37555349  0.39590524  0.22463055]
 [-0.46043277  0.39805237  0.37268412  0.22276568  0.49565864 -0.02403753
   0.14180977  0.07271966 -0.33350997 -0.24115478]
 [-0.30192161  0.18580638 -0.12840671 -0.71123187 -0.02576491  0.10946048
   0.47718378 -0.31007677  0.02038784  0.12274863]
 [ 0.26901203  0.09679569 -0.30329614  0.41158977  0.11026846 -0.24897028
   0.62929629 -0.23384344  0.2611964  -0.2525925 ]
 [ 0.1235864   0.12176666  0.0547025   0.12728051  0.27585949 -0.33158646
   0.02475187 -0.12885138 -0.08494957  0.86036434]
 [-0.30114986 -0.2197743  -0.24955475 -0.09226451  0.00559164 -0.35950503
   0.24902454  0.76731762  0.06424171  0.07762742]]

but the results did not seem really relevant or usable.)
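
Also, for clarity: what I would actually feed to a clustering step is the projected data from pca.transform, not the loadings in pca.components_ printed above. A minimal sketch on the same X_scaled:

# Project the scaled data onto the retained principal components.
# X_reduced has shape (n_samples, n_kept_components) and is what
# a clustering algorithm would run on.
X_reduced = pca.transform(X_scaled)
print(X_reduced.shape)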

  • Build an autoencoder to reduce the dimensionality of my dataset. First I split the data in two, then I create the autoencoder:
    from sklearn.model_selection import train_test_split
    from tensorflow.keras.layers import Input, Dense
    from tensorflow.keras.models import Model

    x_train, x_test = train_test_split(all_features, test_size=0.2)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')

    # 41,206 input features compressed down to a 32-dimensional code
    input_size = 41206
    hidden_size = 1280
    code_size = 32

    input_data = Input(shape=(input_size,))

    hidden_1 = Dense(hidden_size, activation='relu')(input_data)
    code = Dense(code_size, activation='relu')(hidden_1)
    hidden_2 = Dense(hidden_size, activation='relu')(code)
    output_data = Dense(input_size, activation='sigmoid')(hidden_2)

    autoencoder = Model(input_data, output_data)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    autoencoder.fit(x_train, x_train, epochs=3)
  • Run a classic clustering ML algorithm (KNN, DBSCAN, or another) on the reduced data; a rough sketch of what I have in mind for these last two steps is just below.
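
To make the last two steps concrete, here is a rough sketch of what I have in mind (the encoder sub-model and the choice of KMeans with 10 clusters are just placeholders, and it ignores, for now, the sparse-input problem I describe below):

from sklearn.cluster import KMeans

# Re-use the trained layers up to the bottleneck to get a
# 32-dimensional embedding of each row (assumes x_train is dense here).
encoder = Model(input_data, code)
embeddings = encoder.predict(x_train)

# Cluster the embeddings; the number of clusters is an assumption.
kmeans = KMeans(n_clusters=10, random_state=0)
labels = kmeans.fit_predict(embeddings)
print(labels[:20])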

So I have two main questions:

  • How confident are you that this approach will work?
  • I can't get my autoencoder to work. When I try to fit it on my data...
    # train the model
    autoencoder.fit(x_train,
                    x_train,
                    epochs=50,
                    batch_size=256,
                    shuffle=True)
    
    autoencoder.summary()

...I get the following error:

TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("DeserializeSparse_1:0", shape=(None, 2), dtype=int64), values=Tensor("DeserializeSparse_1:1", shape=(None,), dtype=float32), dense_shape=Tensor("stack_1:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.

I did some research on my error and found this GitHub thread, which suggests a solution based on creating a SparseToDense layer. However, I am having a hard time adapting that solution to my code.
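
From what I understand, the underlying issue is that the ColumnTransformer output (dominated by the TF-IDF blocks) is a scipy sparse matrix, which ends up as the SparseTensor mentioned in the TypeError. One adaptation I have been experimenting with - just a sketch, and the SparseBatchGenerator helper is my own naming, not something from that thread - is to serve dense mini-batches instead of densifying the whole (89,000 x 41,206) matrix at once:

import numpy as np
from tensorflow.keras.utils import Sequence

class SparseBatchGenerator(Sequence):
    """Serves dense mini-batches from a scipy sparse matrix."""
    def __init__(self, X, batch_size=256):
        self.X = X
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(self.X.shape[0] / self.batch_size))

    def __getitem__(self, idx):
        batch = self.X[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch = batch.toarray().astype('float32')
        return batch, batch  # autoencoder: the target is the input itself

# autoencoder.fit(SparseBatchGenerator(x_train), epochs=50)

But I am not sure this is the right way to go about it.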

Thanks in advance to everyone who takes the time to read this ;)

Médéric


1 Answer


For something like this, I would lean towards running a feature-importance experiment.

Try this code.

# Let's load the packages
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from matplotlib import pyplot as plt

plt.rcParams.update({'figure.figsize': (12.0, 8.0)})
plt.rcParams.update({'font.size': 14})

# Load the data set and split for training and testing.
boston = load_boston()
X = pd.DataFrame(boston.data, columns=boston.feature_names)
y = boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=12)

# Fit the Random Forest Regressor with 100 Decision Trees:
rf = RandomForestRegressor(n_estimators=100)
rf.fit(X_train, y_train)

# To get the feature importances from the Random Forest model use the feature_importances_ attribute:
rf.feature_importances_

sorted_idx = rf.feature_importances_.argsort()
plt.barh(boston.feature_names[sorted_idx], rf.feature_importances_[sorted_idx])
plt.xlabel("Random Forest Feature Importance")

For more information, see the link below.

https://towardsdatascience.com/feature-selection-with-pandas-e3690ad8504b

answered 2021-09-18T15:28:21.097