
I am working with a PySpark DataFrame. I need to perform TF-IDF, and for the preceding steps such as tokenization and normalization I am using Spark NLP.
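
For context, the step before the tokenizer is a DocumentAssembler that turns the raw text column into the 'document' annotations the Tokenizer reads from (a minimal sketch; I'm assuming my raw text column is called text):

from sparknlp.base import DocumentAssembler

# produces the 'document' column consumed by the Tokenizer further below
document_assembler = DocumentAssembler()\
     .setInputCol('text')\
     .setOutputCol('document')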

After applying the tokenizer, my df looks like this:

df.select('tokenized').show(5, truncate = 130)

+----------------------------------------------------------------------------------------------------------------------------------+
|                                                                                                                  tokenized       |
+----------------------------------------------------------------------------------------------------------------------------------+
|[content, type, multipart, alternative, boundary, nextpart, da, df, nextpart, da, df, content, type, text, plain, charset, asci...|
|[receive, ameurht, eop, eur, prod, protection, outlook, com, cyprmb, namprd, prod, outlook, com, https, via, cyprca, namprd, pr...|
|[plus, every, photographer, need, mm, lens, digital, photography, school, email, newsletter, http, click, aweber, com, ct, l, m...|
|[content, type, multipart, alternative, boundary, nextpart, da, beb, nextpart, da, beb, content, type, text, plain, charset, as...|
|[original, message, customer, service, mailto, ilpjmwofnst, qssadxnvrvc, narrig, stepmotherr, eviews, com, send, thursday, dece...|
+----------------------------------------------------------------------------------------------------------------------------------+
only showing top 5 rows

The next step is to apply the Normalizer.

I want to set up several cleanup patterns (see the small sketch after the list below for the combined result I'm after):

1) remove all purely numeric tokens and strip digits from within words
-> example: [jhghgb56, 5897t95, fhgbg4, 7474, hfgbgb]
-> expected output: [jhghgb, fhgbg, hfgbgb]

2) remove all words shorter than 4 characters
-> example: [gfh, ehfufibf, hi, df, jdfh]
-> expected output: [ehfufibf, jdfh]
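
In plain Python terms, the combined result I'm after looks roughly like this (just a sketch of the two rules applied in sequence, not Spark code):

import re

tokens = ['jhghgb56', '5897t95', 'fhgbg4', '7474', 'hfgbgb', 'gfh', 'hi']

# rule 1: strip everything that is not a letter from each token
cleaned = [re.sub(r'[^A-Za-z]', '', t) for t in tokens]
# rule 2: drop whatever is left that is shorter than 4 characters
cleaned = [t for t in cleaned if len(t) >= 4]
# cleaned == ['jhghgb', 'fhgbg', 'hfgbgb']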

I tried this:

from sparknlp.annotator import Tokenizer, Normalizer

tokenizer = Tokenizer()\
     .setInputCols(['document'])\
     .setOutputCol('tokenized')\
     .setMinLength(3)

# keep only alphabetic characters inside each token
cleanup = ["[^A-Za-z]"]

normalizer = Normalizer()\
     .setInputCols(['tokenized'])\
     .setOutputCol('normalized')\
     .setLowercase(True)\
     .setCleanupPatterns(cleanup)

So far, cleanup = ["[^A-Za-z]"] satisfies the first condition. But now I am left with cleaned words of fewer than 4 characters, and I don't understand how to remove them. Any help would be appreciated!
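
I assume I could flatten the annotations with a Finisher and filter out the short tokens afterwards with plain Spark SQL, something like the rough sketch below (the column names are just placeholders), but I'd prefer to handle this inside the Normalizer step if that's possible:

from sparknlp.base import Finisher
from pyspark.sql import functions as F

# flatten the Normalizer annotations into a plain array<string> column
finisher = Finisher()\
     .setInputCols(['normalized'])\
     .setOutputCols(['normalized_tokens'])\
     .setCleanAnnotations(False)

df_finished = finisher.transform(df)

# drop tokens shorter than 4 characters after the fact
df_filtered = df_finished.withColumn(
    'normalized_tokens',
    F.expr('filter(normalized_tokens, t -> length(t) >= 4)')
)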
