
I am a new Python programmer. I wrote a simple script that does the following:

  • asks the user for a url
  • reads the url (urlopen(url).read())
  • tokenizes the result of the above command

I put the tokenized results into two files. One gets the words in Latin characters (English, Spanish, etc.) and the other gets the rest (Greek words, etc.).

The problem is that when I open a Greek url, I do get the Greek text from it, but I see it as a sequence of characters rather than words (unlike what happens with the Latin ones).

I expect to get a list of words (μαρια, γιωργος, παιδι) (3 items), but instead I get ('μ', 'α', 'ρ', 'ι', 'α', ...), with as many items as there are letters.

What should I do? (The encoding is utf-8.)
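A minimal sketch of what I mean (Python 3 syntax for brevity; the string is just sample data): urlopen(url).read() returns raw bytes, and the Greek text only behaves as words once it has been decoded.

```python
# -*- coding: utf-8 -*-
# Sketch: .read() gives us raw UTF-8 bytes, not text.
raw = "μαρια γιωργος παιδι".encode("utf-8")

# After decoding, splitting yields whole words (3 items):
words = raw.decode("utf-8").split()

# Without decoding, each Greek letter occupies two bytes, so
# character-level processing sees many more items than there are letters.
assert len(words) == 3
```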

The code follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

#Importing useful libraries 
#NOTE: Nltk should be installed first!!!
import nltk
import urllib #could also be urllib
import re
import lxml.html.clean
import unicodedata
from urllib import urlopen

http = "http://"
www = "www."
#pattern = r'[^\a-z0-9]'

#Demand url from the user
url=str(raw_input("Please, give a url and then press ENTER: \n"))


#Construct a valid url syntax
if (url.startswith("http://"))==False:
    if(url.startswith("www"))==False:
        msg=str(raw_input("Does it need 'www'? Y/N \n"))
        if (msg=='Y') | (msg=='y'):
            url=http+www+url
        elif (msg=='N') | (msg=='n'):
            url=http+url
        else:
            print "You should type 'y' or 'n'"
    else:
        url=http+url

latin_file = open("Latin_words.txt", "w")
greek_file = open("Other_chars.txt", "w")
latin_file.write(url + '\n')
latin_file.write("The latin words of the above url are the following:" + '\n')
greek_file.write("Οι ελληνικές λέξεις καθώς και απροσδιόριστοι χαρακτήρες")

#Reading the given url

raw=urllib.urlopen(url).read()

#Retrieve the html body from the url. Clean it from html special characters
pure = nltk.clean_html(raw)
text = pure

#Retrieve the words (tokens) of the html body in a list
tokens = nltk.word_tokenize(text)

counter=0
greeks=0
for i in tokens:
    if re.search('[^a-zA-Z]', i):
        greeks+=1
        greek_file.write(i)
    else:
        if len(i)>=4:
            print i
            counter+=1
            latin_file.write(i + '\n')
        else:
            pass #ignore short tokens


#Print the number of words that I shall take as a result
print "The number of latin tokens is: %d" %counter

latin_file.write("The number of latin tokens is: %d and the number of other characters is: %d" %(counter, greeks))
latin_file.close()
greek_file.close()

I checked it in many ways and, as far as I can tell, the program only recognizes Greek characters but not Greek words, i.e. the spaces by which we separate words!

If I type a Greek sentence with spaces in the terminal, it is displayed correctly. The problem appears when I read something such as the body of an html page.

Also, in text_file.write(i), where i is Greek: if I write text_file.write(i + '\n'), the result is unrecognizable characters, i.e. I lose my encoding!
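For reference, a minimal sketch of the write I am attempting (Python 3 syntax; opening the file via io.open with an explicit encoding seems to keep the Greek intact, though I am not sure this is the right fix):

```python
# -*- coding: utf-8 -*-
import io

# Sketch: writing Greek tokens with an explicit file encoding.
# (A plain open() without an encoding is where my characters get mangled.)
with io.open("Other_chars.txt", "w", encoding="utf-8") as greek_file:
    for word in [u"μαρια", u"γιωργος", u"παιδι"]:
        greek_file.write(word + u"\n")
```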

Any ideas on the above?


3 Answers


Python's re module is notorious for its weak unicode support. For serious unicode work, consider the alternative regex module, which fully supports unicode scripts and properties. Example:

text = u"""
Some latin words, for example: cat niño määh fuß
Οι ελληνικές λέξεις καθώς και απροσδιόριστοι χαρακτήρες
"""

import regex

latin_words = regex.findall(ur'\p{Latin}+', text)
greek_words = regex.findall(ur'\p{Greek}+', text)
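If installing a third-party module is not an option, a rough stdlib fallback (a sketch, not a full script-detection solution) is to classify words by the unicode names of their letters:

```python
# -*- coding: utf-8 -*-
import unicodedata

def looks_greek(word):
    # The unicode names of Greek letters all contain "GREEK",
    # e.g. unicodedata.name(u"μ") == "GREEK SMALL LETTER MU"
    return any("GREEK" in unicodedata.name(ch, "") for ch in word)

tokens = [u"cat", u"μαρια", u"fuß", u"παιδι"]
greek_words = [w for w in tokens if looks_greek(w)]
latin_words = [w for w in tokens if not looks_greek(w)]
```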
Answered 2012-09-27T08:07:20.523

Here is a simplified version of your code, using the excellent requests library to fetch the URL, the with statement to close the files automatically, and io to help with utf8.

import io
import nltk
import requests

url = raw_input("Please, give a url and then press ENTER: \n")
if not url.startswith('http://'):
    url = 'http://' + url
# requests decodes the response body to unicode for us
page_text = requests.get(url).text
tokens = nltk.word_tokenize(page_text)

# Note: u'μαρια'.isalpha() is also True, so check for ASCII letters explicitly
latin_words = [w for w in tokens if w.isalpha() and all(ord(c) < 128 for c in w)]
greek_words = [w for w in tokens if w not in latin_words]

print 'The number of latin tokens is {0}'.format(len(latin_words))

with io.open('latin_words.txt', 'w', encoding='utf8') as latin_file, \
     io.open('greek_words.txt', 'w', encoding='utf8') as greek_file:

    greek_file.write(u'\n'.join(greek_words) + u'\n')
    latin_file.write(u'\n'.join(latin_words) + u'\n')

    latin_file.write(u'The number of latin words is {0} and the number of others {1}\n'
                     .format(len(latin_words), len(greek_words)))

I simplified the part that checks the URL; that way, an invalid URL simply will not be read.

Answered 2012-09-27T08:08:21.113

Here, I think that with if re.search('[^a-zA-Z]', i) you are testing single characters rather than whole strings; you can get the words out of the tokens list by looping over the list.
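To illustrate the point: re.search(u'[^a-zA-Z]', i) fires as soon as any one character falls outside A-Z/a-z, so an accented Latin word and a fully Greek word are both sent to the Greek file. A small sketch of that behaviour (sample words made up):

```python
# -*- coding: utf-8 -*-
import re

words = [u"cat", u"niño", u"μαρια"]
# Any word containing even one non-ASCII letter matches the class:
flagged = [w for w in words if re.search(u"[^a-zA-Z]", w)]
# "niño" is flagged along with "μαρια", although it is a Latin-script word
```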

Answered 2012-09-27T07:45:48.797