
I want to do some NLP analysis on a string column of a PySpark dataframe.

DF:

 year  month  u_id  rating_score  p_id  review
 2010  09     tvwe  1             p_5   I do not like it because its size is not for me.
 2011  11     frsa  1             p_7   I am allergic to the peanut elements.
 2015  5      ybfd  1             p_2   It is a repeated one, please no more.
 2016  7      tbfb  2             p_2   It is not good for my oil hair.

Each p_id represents an item. Each u_id may have reviews for each item. A review can be a few words, a sentence, a paragraph, or even emojis.

I want to find the root causes of why these items are rated low or high. For example, how many u_id complain about an item's size, allergies to its chemical ingredients, or other issues related to the item's characteristics.

From How to iterate over rows in a DataFrame in Pandas, I learned that converting a dataframe to a numpy array and then using vectorization is a more efficient way to do the NLP analysis.

I am trying to use SparkNLP to extract adjectives and noun phrases from each review, keyed by year, month, u_id, p_id.

I am not sure how to apply numpy vectorization to do this efficiently.

My Python 3 code:

from sparknlp.pretrained import PretrainedPipeline
import numpy as np

df = spark.sql('select year, month, u_id, p_id, comment from MY_DF where rating_score = 1 and isnull(comment) = false')

trainseries = df['comment'].apply(lambda x: np.array(x.toArray())).as_matrix().reshape(-1, 1)

text = np.apply_along_axis(lambda x: x[0], 1, trainseries)  # TypeError: 'Column' object is not callable

pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
result = pipeline_dl.fullAnnotate(text)

The code does not work. I also need to keep the other columns (e.g. year, month, u_id, p_id) through the vectorization and make sure the NLP results stay aligned with year, month, u_id, p_id.

I don't like the approach in How to convert a pyspark dataframe column to a numpy array, because collect() is too slow.

Thanks.


1 Answer


IIUC, you don't need Numpy (Spark handles the vectorization internally). Just run transform and then select and filter the right information from the resulting dataframe:

from sparknlp.pretrained import PretrainedPipeline

df = spark.sql('select year, month, u_id, p_id, comment from MY_DF where rating_score = 1 and isnull(comment) = false')

# the pretrained pipeline expects the input column to be named `text`
df1 = df.withColumnRenamed('comment', 'text')

pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')

result = pipeline_dl.transform(df1)

# keep the original columns and pull out the words whose POS tag starts with NN (noun) or JJ (adjective)
df_new = result.selectExpr(
  *df1.columns,
  'transform(filter(pos, p -> p.result rlike "^(?:NN|JJ)"), x -> x.metadata.word) as words'
)

Output:

df_new.show(10,0)
+-----+-----+----+------------+----+------------------------------------------------+----------------------------+
|year |month|u_id|rating_score|p_id|text                                            |words                       |
+-----+-----+----+------------+----+------------------------------------------------+----------------------------+
|2010 |09   |tvwe|1           |p_5 |I do not like it because its size is not for me.|[size]                      |
|2011 |11   |frsa|1           |p_7 |I am allergic to the peanut elements.           |[allergic, peanut, elements]|
|2015 |5    |ybfd|1           |p_2 |It is a repeated one, please no more.           |[more]                      |
|2016 |7    |tbfb|2           |p_2 |It is not good for my oil hair.                 |[good, oil, hair]           |
+-----+-----+----+------------+----+------------------------------------------------+----------------------------+
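To get back to the original question of counting how many u_id complained about a given aspect of an item, one option is to explode the extracted words and aggregate per p_id. This is only a minimal sketch on top of df_new above (the word/n_users column names are just illustrative), and you may want to group by year and month as well:

from pyspark.sql import functions as F

# one row per (item, reviewer, extracted word), then count distinct reviewers per word
complaints = (
    df_new
    .select('p_id', 'u_id', F.explode('words').alias('word'))
    .groupBy('p_id', F.lower('word').alias('word'))
    .agg(F.countDistinct('u_id').alias('n_users'))
    .orderBy(F.desc('n_users'))
)
complaints.show(10, 0)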

Notes:

(1) result = pipeline.fullAnnotate(df, 'comment') is a shortcut for renaming comment to text and then calling pipeline.transform(df1). The first argument of fullAnnotate can be a DataFrame, a List, or a String (see the sketch after these notes).

(2) The list of POS tags is from https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
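As a quick illustration of note (1), the two calls below should be equivalent (a sketch based on that note, not separately verified here):

# per note (1): fullAnnotate can take a DataFrame plus the name of the text column,
# which amounts to renaming comment to text and calling transform
result_a = pipeline_dl.transform(df.withColumnRenamed('comment', 'text'))
result_b = pipeline_dl.fullAnnotate(df, 'comment')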

Answered on 2020-08-26T15:48:02.550