
I am trying to learn how to tag Spanish words using NLTK.

From the NLTK book, it is quite easy to tag English words using their examples. Because I am new to NLTK and to language processing in general, I am quite confused about how to proceed.

I have downloaded the cess_esp corpus. Is there a way to use this corpus in nltk.pos_tag? I looked at the pos_tag documentation and didn't see anything suggesting I could. I feel like I am missing some key concepts. Do I have to manually tag the words in my text against the cess_esp corpus? (By manually I mean tokenizing my sentences and running them against the corpus.) Or am I off the mark entirely? Thank you.

4 Answers

First you need to read the tagged sentences from a corpus. NLTK provides a nice interface so that you don't have to worry about the different formats of different corpora; you can simply import the corpus and use the corpus object's functions to access the data. See http://nltk.googlecode.com/svn/trunk/nltk_data/index.xml

Then you have to choose a tagger and train it. There are fancier options, but you can start with the N-gram taggers.
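The usual way to make N-gram taggers robust to sparse data is to chain them with a backoff tagger. Here is a minimal sketch of that pattern; the two training sentences and their tags are made up for illustration and stand in for cess_esp:

```python
from nltk import DefaultTagger, UnigramTagger, BigramTagger

# Tiny hand-tagged training set (made-up tags), standing in for cess_esp.
train_sents = [
    [('Hola', 'I'), ('mundo', 'NC')],
    [('Hola', 'I'), ('amigo', 'NC')],
]

# Chain taggers: the bigram tagger falls back to the unigram tagger,
# which falls back to a default tag for unseen words.
t0 = DefaultTagger('NC')                      # tag everything as NC
t1 = UnigramTagger(train_sents, backoff=t0)   # most frequent tag per word
t2 = BigramTagger(train_sents, backoff=t1)    # condition on the previous tag

print(t2.tag(['Hola', 'mundo']))  # [('Hola', 'I'), ('mundo', 'NC')]
```

An unknown word such as 'desconocida' falls through both N-gram taggers and gets the default 'NC' tag.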

Then you can use the tagger to tag the sentences you want. Here is some example code:

from nltk.corpus import cess_esp as cess
from nltk import UnigramTagger as ut
from nltk import BigramTagger as bt

# Read the corpus into a list, 
# each entry in the list is one sentence.
cess_sents = cess.tagged_sents()

# Train the unigram tagger
uni_tag = ut(cess_sents)

sentence = "Hola , esta foo bar ."

# Tagger reads a list of tokens.
uni_tag.tag(sentence.split(" "))

# Split corpus into training and testing set.
train = int(len(cess_sents)*90/100) # 90%

# Train a bigram tagger with only training data.
bi_tag = bt(cess_sents[:train])

# Evaluate on the remaining 10% (testing data).
bi_tag.evaluate(cess_sents[train:])

# Using the tagger.
bi_tag.tag(sentence.split(" "))

Training a tagger on a large corpus can take a long time. Instead of training a tagger every time you need one, it is convenient to save the trained tagger in a file for later reuse.

Please look at the Storing Taggers section in http://nltk.googlecode.com/svn/trunk/doc/book/ch05.html
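Saving and reloading works with the standard pickle module, since trained NLTK taggers are picklable. A minimal sketch, again with a made-up one-sentence training set standing in for cess_esp:

```python
import pickle
from nltk import UnigramTagger

# Made-up tagged sentence standing in for cess_esp.tagged_sents().
train_sents = [[('Hola', 'I'), ('mundo', 'NC')]]
tagger = UnigramTagger(train_sents)

# Serialize the trained tagger to a file ...
with open('cess_unigram.tagger', 'wb') as f:
    pickle.dump(tagger, f)

# ... and load it back later without retraining.
with open('cess_unigram.tagger', 'rb') as f:
    tagger = pickle.load(f)

print(tagger.tag(['Hola']))  # [('Hola', 'I')]
```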

Answered 2013-02-07T02:12:06.117

Given the tutorial in the previous answer, here is a more object-oriented approach from the spaghetti tagger: https://github.com/alvations/spaghetti-tagger

# -*- coding: utf-8 -*-

from pickle import dump, load  # cPickle on Python 2

from nltk import UnigramTagger as ut
from nltk import BigramTagger as bt

def loadtagger(taggerfilename):
    with open(taggerfilename, 'rb') as infile:
        return load(infile)

def traintag(corpusname, corpus):
    # Function to save a trained tagger.
    def savetagger(tagfilename, tagger):
        with open(tagfilename, 'wb') as outfile:
            dump(tagger, outfile, -1)
    # Train a UnigramTagger.
    uni_tag = ut(corpus)
    savetagger(corpusname + '_unigram.tagger', uni_tag)
    # Train a BigramTagger.
    bi_tag = bt(corpus)
    savetagger(corpusname + '_bigram.tagger', bi_tag)
    print("Tagger trained with", corpusname,
          "using UnigramTagger and BigramTagger.")

# Function to unchunk the corpus: split multi-word expressions
# ("foo_bar") back into individual word tokens.
def unchunk(corpus):
    nomwe_corpus = []
    for sent in corpus:
        nomwe = " ".join([word.replace("_", " ") for word, tag in sent])
        nomwe_corpus.append(nomwe.split())
    return nomwe_corpus

class cesstag:
    def __init__(self, mwe=True):
        self.mwe = mwe
        # Train the taggers on first use.
        try:
            loadtagger('cess_unigram.tagger').tag(['estoy'])
            loadtagger('cess_bigram.tagger').tag(['estoy'])
        except IOError:
            print("*** First-time use of cess tagger ***")
            print("Training tagger ...")
            from nltk.corpus import cess_esp as cess
            cess_sents = cess.tagged_sents()
            traintag('cess', cess_sents)
            # Train taggers on the corpus with MWEs split up.
            cess_nomwe = unchunk(cess.tagged_sents())
            tagged_cess_nomwe = batch_pos_tag(cess_nomwe)
            traintag('cess_nomwe', tagged_cess_nomwe)
            print()
        # Load the taggers.
        if self.mwe:
            self.uni = loadtagger('cess_unigram.tagger')
            self.bi = loadtagger('cess_bigram.tagger')
        else:
            self.uni = loadtagger('cess_nomwe_unigram.tagger')
            self.bi = loadtagger('cess_nomwe_bigram.tagger')

def pos_tag(tokens, mwe=True):
    tagger = cesstag(mwe)
    return tagger.uni.tag(tokens)

def batch_pos_tag(sentences, mwe=True):
    tagger = cesstag(mwe)
    return tagger.uni.tag_sents(sentences)  # batch_tag() in older NLTK

tagger = cesstag()
print(tagger.uni.tag('Mi colega me ayuda a programar cosas .'.split()))
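The unchunk step above matters because cess_esp stores multi-word expressions joined with underscores. A small illustration of what it does, using a made-up tagged sentence (the tags here are only placeholders):

```python
# Multi-word expressions in cess_esp are joined with "_", e.g.:
sent = [('Banco_Central', 'np00000'), ('de', 'sps00'), ('Chile', 'np00000')]

# unchunk splits them back into plain word tokens.
words = " ".join(w.replace("_", " ") for w, t in sent).split()
print(words)  # ['Banco', 'Central', 'de', 'Chile']
```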
Answered 2013-12-28T17:04:43.983

I ended up here searching for POS taggers for languages other than English. Another option for your problem is to use the spaCy library. It offers POS tagging for multiple languages, such as Dutch, German, French, Portuguese, Spanish, Norwegian, Italian, Greek and Lithuanian.

From the spaCy documentation:

import es_core_news_sm
nlp = es_core_news_sm.load()
doc = nlp("El copal se usa principalmente para sahumar en distintas ocasiones como lo son las fiestas religiosas.")
print([(w.text, w.pos_) for w in doc])

Which results in:

[('El', 'DET'), ('copal', 'NOUN'), ('se', 'PRON'), ('usa', 'VERB'), ('principalmente', 'ADV'), ('para', 'ADP'), ('sahumar', 'VERB'), ('en', 'ADP'), ('distintas', 'DET'), ('ocasiones', 'NOUN'), ('como', 'SCONJ'), ('lo', 'PRON'), ('son', 'AUX'), ('las', 'DET'), ('fiestas', 'NOUN'), ('religiosas', 'ADJ'), ('.', 'PUNCT')]

And to visualize it in a notebook:

from spacy import displacy

displacy.render(doc, style='dep', jupyter=True, options={'distance': 120})

(displaCy renders the dependency-parse visualization inline.)

Answered 2020-02-18T11:46:13.243

The following script gives you a quick way to get a "bag of words" from Spanish sentences. Note that if you want to do it right you have to tokenize the sentences before tagging, so 'religiosas.' must be separated into two tokens, 'religiosas' and '.'.

# -*- coding: utf-8 -*-

# About the tagger: http://nlp.stanford.edu/software/tagger.shtml
# About the tagset: nlp.lsi.upc.edu/freeling/doc/tagsets/tagset-es.html

# Note: newer NLTK versions rename this class StanfordPOSTagger.
from nltk.tag.stanford import POSTagger

spanish_postagger = POSTagger('models/spanish.tagger',
                              'stanford-postagger.jar', encoding='utf8')

sentences = ['El copal se usa principalmente para sahumar en distintas ocasiones como lo son las fiestas religiosas.',
             'Las flores, hojas y frutos se usan para aliviar la tos y también se emplea como sedante.']

# In this tagset, noun tags start with 'n' (e.g. nc0s000, np00000).
def isNoun(tag):
    return tag.startswith('n')

for sent in sentences:

    words = sent.split()
    tagged_words = spanish_postagger.tag(words)

    nouns = []

    for (word, tag) in tagged_words:

        print(word + ' ' + tag)
        if isNoun(tag):
            nouns.append(word)

    print(nouns)

Which gives:

El da0000
copal nc0s000
se p0000000
usa vmip000
principalmente rg
para sp000
sahumar vmn0000
en sp000
distintas di0000
ocasiones nc0p000
como cs
lo pp000000
son vsip000
las da0000
fiestas nc0p000
religiosas. np00000
['copal', 'ocasiones', 'fiestas', 'religiosas.']
Las da0000
flores, np00000
hojas nc0p000
y cc
frutos nc0p000
se p0000000
usan vmip000
para sp000
aliviar vmn0000
la da0000
tos nc0s000
y cc
también rg
se p0000000
emplea vmip000
como cs
sedante. nc0s000
['flores,', 'hojas', 'frutos', 'tos', 'sedante.']
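Notice that 'religiosas.' and 'flores,' above stay glued to their punctuation because the script only splits on whitespace. Even a crude regex tokenization separates them; this is only a sketch, and NLTK's word_tokenize (with the Spanish Punkt data) is the more robust choice:

```python
import re

sent = "las fiestas religiosas."
# Split into word tokens and single punctuation tokens.
tokens = re.findall(r"\w+|[^\w\s]", sent)
print(tokens)  # ['las', 'fiestas', 'religiosas', '.']
```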
Answered 2015-02-03T22:06:30.580