131

So I have a dataset that I would like to remove stop words from, using

stopwords.words('english')

I'm struggling with how to use this within my code to simply take these words out. I already have a list of the words from this dataset; the part I'm struggling with is comparing it against this list and removing the stop words. Any help is appreciated.


14 Answers

231
from nltk.corpus import stopwords
# ...
filtered_words = [word for word in word_list if word not in stopwords.words('english')]
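One note on the comprehension above: `stopwords.words('english')` is re-evaluated for every word and returns a plain list, so each membership test scans it linearly. Building a set once makes the filter much faster on large datasets. A minimal sketch, where the hard-coded stop set stands in for the NLTK list:

```python
# Build the stop list once and convert it to a set for O(1) lookups.
# The hard-coded set below stands in for set(stopwords.words('english')).
stop_set = {"the", "is", "in", "a"}

word_list = ["the", "cat", "is", "in", "the", "hat"]
filtered_words = [word for word in word_list if word not in stop_set]
# filtered_words == ["cat", "hat"]
```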
Answered on 2011-03-30T12:53:40.180
20

You could also do a set difference, for example:

import nltk
list(set(nltk.regexp_tokenize(sentence, pattern, gaps=True)) - set(nltk.corpus.stopwords.words('english')))
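Keep in mind that a set difference discards duplicate words and word order. A pure-Python sketch of the same idea, with `str.split` standing in for `nltk.regexp_tokenize` and a hand-written stop set standing in for the NLTK one:

```python
# Set difference drops the stop words, but also drops duplicates and ordering.
sentence = "the cat sat on the mat"
stop_set = {"the", "on", "a"}   # stand-in for set(stopwords.words('english'))
remaining = set(sentence.split()) - stop_set
# remaining == {"cat", "sat", "mat"}
```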
Answered on 2012-03-26T22:25:10.447
18

To exclude all types of stop words, including the nltk stop words, you could do something like this:

from stop_words import get_stop_words
from nltk.corpus import stopwords

stop_words = list(get_stop_words('en'))         #About 900 stopwords
nltk_words = list(stopwords.words('english')) #About 150 stopwords
stop_words.extend(nltk_words)

output = [w for w in word_list if w not in stop_words]
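Since the two sources overlap, extending one list with the other leaves duplicates; a set union is a cleaner way to merge them. A sketch with short stand-in lists:

```python
# Merge two stop-word sources without duplicates; the short lists stand in
# for get_stop_words('en') and stopwords.words('english').
stop_words = ["a", "an", "the"]
nltk_words = ["the", "is", "in"]
combined = set(stop_words) | set(nltk_words)

word_list = ["the", "answer", "is", "here"]
output = [w for w in word_list if w not in combined]
# output == ["answer", "here"]
```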
Answered on 2017-10-27T14:31:34.763
14

I suppose you have a list of words (word_list) from which you want to remove stop words. You could do something like this:

filtered_word_list = word_list[:] #make a copy of the word_list
for word in word_list: # iterate over word_list
  if word in stopwords.words('english'): 
    filtered_word_list.remove(word) # remove word from filtered_word_list if it is a stopword
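Two caveats on this loop: it calls `stopwords.words('english')` on every iteration (hoist it into a set once), and `.remove` deletes only the first occurrence, which is why iterating over the original list while mutating the copy is essential. A sketch with a stand-in stop set:

```python
stop_set = {"a", "the", "in"}   # stands in for set(stopwords.words('english'))

word_list = ["a", "walk", "in", "the", "park"]
filtered_word_list = word_list[:]     # mutate the copy, iterate the original
for word in word_list:
    if word in stop_set:
        filtered_word_list.remove(word)
# filtered_word_list == ["walk", "park"]
```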
Answered on 2011-03-30T12:51:52.163
9

There is a very easy-to-use and lightweight python package stop-words just for this.

First install the package with: pip install stop-words

Then you can remove your words in one line using a list comprehension:

from stop_words import get_stop_words

filtered_words = [word for word in dataset if word not in get_stop_words('english')]

This package is very lightweight to download (unlike nltk), works for both Python 2 and Python 3, and it has stop words for many other languages, such as:

    Arabic
    Bulgarian
    Catalan
    Czech
    Danish
    Dutch
    English
    Finnish
    French
    German
    Hungarian
    Indonesian
    Italian
    Norwegian
    Polish
    Portuguese
    Romanian
    Russian
    Spanish
    Swedish
    Turkish
    Ukrainian
Answered on 2019-09-22T12:13:12.877
5

Use the textcleaner library to remove stop words from your data.

Follow this link: https://yugantm.github.io/textcleaner/documentation.html#remove_stpwrds

Follow the steps below to use this library.

pip install textcleaner

After installation:

import textcleaner as tc
data = tc.document(<file_name>) 
#you can also pass list of sentences to the document class constructor.
data.remove_stpwrds() #inplace is set to False by default

Use the above code to remove the stop words.

Answered on 2019-02-12T12:30:08.127
5

Here is my take on this, in case you want to immediately get the answer back as a string (rather than a list of filtered words):

STOPWORDS = set(stopwords.words('english'))
text =  ' '.join([word for word in text.split() if word not in STOPWORDS]) # delete stopwords from text
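A worked example of this split-filter-join pattern, with a tiny stand-in stop set:

```python
STOPWORDS = {"this", "is", "a"}  # stands in for set(stopwords.words('english'))
text = "this is a sample sentence"
text = ' '.join([word for word in text.split() if word not in STOPWORDS])
# text == "sample sentence"
```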
Answered on 2020-02-08T21:01:06.360
2

You can use this function; note that you need to lowercase all the words first:

from nltk.corpus import stopwords

def remove_stopwords(word_list):
        processed_word_list = []
        for word in word_list:
            word = word.lower() # in case they aren't all lower cased
            if word not in stopwords.words("english"):
                processed_word_list.append(word)
        return processed_word_list
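A self-contained sketch of the same lowercase-then-filter routine (the hard-coded stop set stands in for the NLTK list, whose entries are themselves lowercase):

```python
def remove_stopwords(word_list, stop_set=frozenset({"the", "is", "a"})):
    # Lowercase each word before the membership test, since the
    # stop list only contains lowercase entries.
    processed_word_list = []
    for word in word_list:
        word = word.lower()
        if word not in stop_set:
            processed_word_list.append(word)
    return processed_word_list

# remove_stopwords(["The", "Answer", "Is", "HERE"]) == ["answer", "here"]
```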
Answered on 2017-06-13T15:48:12.760
2

Although the question is a bit old, here is a new library worth mentioning that can do additional tasks.

In some cases, you don't want only to remove stop words. Rather, you would want to find the stop words in the text data and store them in a list, so that you can find the noise in your data and make it more interactive.

The library is called 'textfeatures'. You can use it as follows:

! pip install textfeatures
import textfeatures as tf
import pandas as pd

For example, suppose you have the following set of strings:

texts = [
    "blue car and blue window",
    "black crow in the window",
    "i see my reflection in the window"]

df = pd.DataFrame(texts) # Convert to a dataframe
df.columns = ['text'] # give a name to the column
df

Now, call the stopwords() function and pass the parameters you want:

tf.stopwords(df,"text","stopwords") # extract stop words
df[["text","stopwords"]].head() # show the text and stopwords columns

The result will be:

    text                                 stopwords
0   blue car and blue window             [and]
1   black crow in the window             [in, the]
2   i see my reflection in the window    [i, my, in, the]

As you can see, the last column contains the stop words found in that document (record).

Answered on 2021-02-24T12:55:52.707
2

Using filter:

from nltk.corpus import stopwords
# ...  
filtered_words = list(filter(lambda word: word not in stopwords.words('english'), word_list))
Answered on 2017-10-02T02:55:39.350
1
from nltk.corpus import stopwords 

from nltk.tokenize import word_tokenize 

example_sent = "This is a sample sentence, showing off the stop words filtration."

  
stop_words = set(stopwords.words('english')) 
  
word_tokens = word_tokenize(example_sent) 
  
# note: word_tokenize requires the punkt tokenizer data (nltk.download('punkt'))
filtered_sentence = [w for w in word_tokens if w not in stop_words] 
  
print(word_tokens) 
print(filtered_sentence) 
Answered on 2020-07-05T08:27:14.573
0

If your data is stored as a Pandas DataFrame, you can use remove_stopwords from texthero, which uses the NLTK stop word list by default.

import pandas as pd
import texthero as hero
df['text_without_stopwords'] = hero.remove_stopwords(df['text'])
Answered on 2020-06-02T06:58:10.463
0

I will show you some examples. First I extract the text data from the data frame (twitter_df) to process further, as follows:

     from nltk.tokenize import word_tokenize
     tweetText = twitter_df['text']

Then to tokenize I use the following method:

     from nltk.tokenize import word_tokenize
     tweetText = tweetText.apply(word_tokenize)

Then, to remove the stop words:

     import nltk
     from nltk.corpus import stopwords
     nltk.download('stopwords')

     stop_words = set(stopwords.words('english'))
     tweetText = tweetText.apply(lambda x:[word for word in x if word not in stop_words])
     tweetText.head()
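The per-row lambda above can be sketched without pandas; each element below is a token list, as produced by `word_tokenize`:

```python
stop_words = {"the", "in", "a"}  # stands in for set(stopwords.words('english'))
tweet_tokens = [["a", "bird", "in", "the", "tree"],
                ["the", "sky", "turned", "red"]]
filtered = [[w for w in tokens if w not in stop_words]
            for tokens in tweet_tokens]
# filtered == [["bird", "tree"], ["sky", "turned", "red"]]
```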

I think this will help you.

Answered on 2020-10-13T05:28:27.310
-3
print("enter the string from which you want to remove list of stop words")
userstring = input().split(" ")
stop_list = ["a", "an", "the", "in"]    # renamed so it does not shadow the built-in list
another_list = []
for x in userstring:
    if x not in stop_list:              # comparing against the list and keeping the rest
        another_list.append(x)          # it is also possible to use .remove
for x in another_list:
    print(x, end=' ')

# 2) if you want to use .remove, the more preferred code
print("enter the string from which you want to remove list of stop words")
userstring = input().split(" ")
stop_list = ["a", "an", "the", "in"]
for x in userstring[:]:                 # iterate over a copy: removing from the
    if x in stop_list:                  # list being iterated over would skip words
        userstring.remove(x)
for x in userstring:
    print(x, end=' ')
Answered on 2017-03-18T21:04:22.040