I am using NLTK to perform kmeans clustering on my text file, where each line is considered a document. For example, my text file looks like this:

belong finger death punch
hasty
mike hasty walls jericho
jägermeister rules
rules bands follow performing jägermeister stage
approach

Now, the demo code I am trying to run is this:

import sys

import numpy
from nltk.cluster import KMeansClusterer, GAAClusterer, euclidean_distance
import nltk.corpus
from nltk import decorators
import nltk.stem

stemmer_func = nltk.stem.EnglishStemmer().stem
stopwords = set(nltk.corpus.stopwords.words('english'))

@decorators.memoize
def normalize_word(word):
    return stemmer_func(word.lower())

def get_words(titles):
    words = set()
    for title in titles:
        for word in title.split():
            words.add(normalize_word(word))
    return list(words)

@decorators.memoize
def vectorspaced(title):
    title_components = [normalize_word(word) for word in title.split()]
    return numpy.array([
        word in title_components and word not in stopwords
        for word in words], numpy.short)

if __name__ == '__main__':

    filename = 'example.txt'
    if len(sys.argv) == 2:
        filename = sys.argv[1]

    with open(filename) as title_file:

        job_titles = [line.strip() for line in title_file.readlines()]

        words = get_words(job_titles)

        # cluster = KMeansClusterer(5, euclidean_distance)
        cluster = GAAClusterer(5)
        cluster.cluster([vectorspaced(title) for title in job_titles if title])

        # NOTE: This is inefficient, cluster.classify should really just be
        # called when you are classifying previously unseen examples!
        classified_examples = [
                cluster.classify(vectorspaced(title)) for title in job_titles
            ]

        for cluster_id, title in sorted(zip(classified_examples, job_titles)):
            print cluster_id, title

(which can also be found here)

The error I am getting is this:

Traceback (most recent call last):
  File "cluster_example.py", line 40, in <module>
    words = get_words(job_titles)
  File "cluster_example.py", line 20, in get_words
    words.add(normalize_word(word))
  File "<string>", line 1, in <lambda>
  File "/usr/local/lib/python2.7/dist-packages/nltk/decorators.py", line 183, in memoize
    result = func(*args)
  File "cluster_example.py", line 14, in normalize_word
    return stemmer_func(word.lower())
  File "/usr/local/lib/python2.7/dist-packages/nltk/stem/snowball.py", line 694, in stem
    word = (word.replace(u"\u2019", u"\x27")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 13: ordinal not in range(128)

What is going on here?

11 Answers

The file is being read as a bunch of strs, but it should be unicodes. Python tries to convert implicitly, and fails. Change:

job_titles = [line.strip() for line in title_file.readlines()]

to decode the strs explicitly into unicode (assuming UTF-8 here):

job_titles = [line.decode('utf-8').strip() for line in title_file.readlines()]

The decoding can also be done on read, by importing the codecs module and using codecs.open instead of the built-in open.
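A minimal sketch of that variant, reusing the filename variable from the question (Python 2):

import codecs

# codecs.open decodes each line to unicode on read, so no explicit decode is needed
with codecs.open(filename, 'r', encoding='utf-8') as title_file:
    job_titles = [line.strip() for line in title_file]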

Answered on 2013-09-06T03:54:10.640

This worked fine for me:

f = open(file_path, 'r+', encoding="utf-8")

You can add the third parameter, encoding, to ensure the encoding type is 'utf-8'.

Note: this method works fine in Python 3; I have not tried it in Python 2.7.
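A short usage sketch of the same call (Python 3, assuming a UTF-8 encoded text file at a hypothetical file_path):

# 'r+' opens for reading and writing; plain 'r' is enough if you only read
with open(file_path, 'r+', encoding='utf-8') as f:
    lines = [line.strip() for line in f]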

Answered on 2018-03-09T01:18:52.623

For me, the problem was with the terminal encoding. Adding UTF-8 to .bashrc solved it:

export LC_CTYPE=en_US.UTF-8

Don't forget to reload .bashrc afterwards:

source ~/.bashrc
Answered on 2018-03-06T09:42:22.027

You can also try this (the reload is needed because site.py removes setdefaultencoding from sys at interpreter startup):

import sys
reload(sys)
sys.setdefaultencoding('utf8')
Answered on 2017-07-06T14:09:44.443

I got this error when trying to install a Python package in a Docker container. For me, the issue was that the Docker image did not have a locale configured. Adding the following to the Dockerfile solved the problem for me.

# Avoid ascii errors when reading files in Python
RUN apt-get install -y locales && locale-gen en_US.UTF-8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_ALL='en_US.UTF-8'
Answered on 2020-02-05T21:16:43.907

On Ubuntu 18.04 using Python 3.6, I solved the problem with both of the following:

with open(filename, encoding="utf-8") as lines:

and, if you are running the tool as a command line program:

export LC_ALL=C.UTF-8

Note that if you are using Python 2.7, you have to handle this differently. First, you have to set the default encoding:

import sys
reload(sys)
sys.setdefaultencoding('utf-8')

and then, to load the file, you must use io.open to set the encoding:

import io
with io.open(filename, 'r', encoding='utf-8') as lines:

You still need to export the environment variable:

export LC_ALL=C.UTF-8
Answered on 2019-05-13T09:03:14.467

To find ANY and ALL unicode related errors, use the following command:

grep -r -P '[^\x00-\x7f]' /etc/apache2 /etc/letsencrypt /etc/nginx

I found mine in:

/etc/letsencrypt/options-ssl-nginx.conf:        # The following CSP directives don't use default-src as 

Using shed, I found the offending sequence. It turned out to be an editor mistake: the C2 A0 byte pairs at either end of the dump below are the UTF-8 encoding of a non-breaking space (U+00A0).

00008099:     C2  194 302 11000010
00008100:     A0  160 240 10100000
00008101:  d  64  100 144 01100100
00008102:  e  65  101 145 01100101
00008103:  f  66  102 146 01100110
00008104:  a  61  097 141 01100001
00008105:  u  75  117 165 01110101
00008106:  l  6C  108 154 01101100
00008107:  t  74  116 164 01110100
00008108:  -  2D  045 055 00101101
00008109:  s  73  115 163 01110011
00008110:  r  72  114 162 01110010
00008111:  c  63  099 143 01100011
00008112:     C2  194 302 11000010
00008113:     A0  160 240 10100000
Answered on 2018-08-26T13:06:41.460

Use open(fn, 'rb').read().decode('utf-8') instead of just open(fn).read().

Answered on 2019-03-06T09:39:16.867

You can try this before using the job_titles strings:

source = unicode(job_titles, 'utf-8')
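Applied to the question's code, that means decoding each raw line as it is read; a minimal sketch assuming the file is UTF-8 encoded (Python 2):

# decode every str line to unicode before any further processing
job_titles = [unicode(line, 'utf-8').strip() for line in title_file.readlines()]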
Answered on 2017-12-15T13:13:28.063

For Python 3.x or above:

  1. Load the file in a byte stream and decode each line:

     # read the file as raw bytes and decode each line explicitly
     def read_body():
         body = ''
         for line in open('website/index.html', 'rb'):
             body += line.decode('utf-8').strip()
         return body
  2. Use a global setting:

     import io
     import sys
     # rewrap stdout so that printed text is encoded as UTF-8
     sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
Answered on 2019-03-08T08:57:45.253

For Python 3, the default encoding is "utf-8". In case of any problem, the following steps are suggested in the base documentation: https://docs.python.org/2/library/csv.html#csv-examples

  1. Create a generator function that re-encodes the data:

    def utf_8_encoder(unicode_csv_data):
        # the csv module works on byte strings, so re-encode each unicode line
        for line in unicode_csv_data:
            yield line.encode('utf-8')
    
  2. Then use that function inside the reader, for example:

    import csv
    csv_reader = csv.reader(utf_8_encoder(unicode_csv_data))
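Putting the two steps together, a minimal end-to-end sketch that reuses utf_8_encoder from step 1, assuming a UTF-8 encoded CSV file at a hypothetical path data.csv (Python 2, whose csv module works on byte strings):

import csv
import codecs

# codecs.open yields unicode lines; utf_8_encoder re-encodes them for csv
with codecs.open('data.csv', 'r', encoding='utf-8') as f:
    for row in csv.reader(utf_8_encoder(f)):
        print row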
    
Answered on 2018-04-12T21:07:51.717