Richard Rublev - 5 months ago
Python Question

Tokenizer mess with French and Portuguese

I have worked with these two examples:

>>> french_tokenizer=nltk.data.load('tokenizers/punkt/french.pickle')
>>> french_tokenizer.tokenize('Deux agressions en quelques jours,voilà ce qui a motivé hier matin le débrayage collège franco-britanique deLp')
['Deux agressions en quelques jours,voil\xc3\xa0 ce qui a motiv\xc3\xa9 hier matin le d\xc3\xa9brayage coll\xc3\xa8ge franco-britanique deLp']
>>> port_tokenizer=nltk.data.load('tokenizers/punkt/portuguese.pickle')
>>> port_tokenizer.tokenize('Seguranças dos aeroportos começam greve de cinco dias no sábado')
['Seguran\xc3\xa7as dos aeroportos come\xc3\xa7am greve de cinco dias no s\xc3\xa1bado']


The first sentence is French, the second Portuguese. Why do I get all these escape sequences? The first one appears at

voilà

Answer

When typing unicode at the Python 2.7 command line, use u'...' literals:

>>> import nltk
>>> # PunktSentenceTokenizer('french') would treat the string 'french' as
>>> # training text, not a language name, so load the pretrained models:
>>> frtokenizer = nltk.data.load('tokenizers/punkt/french.pickle')
>>> pttokenizer = nltk.data.load('tokenizers/punkt/portuguese.pickle')
>>> s = u'Deux agressions en quelques jours,voilà ce qui a motivé hier matin le débrayage collège franco-britanique deLp'

>>> frtokenizer.tokenize(s)
[u'Deux agressions en quelques jours,voil\xe0 ce qui a motiv\xe9 hier matin le d\xe9brayage coll\xe8ge franco-britanique deLp']
>>> for sentence in frtokenizer.tokenize(s):
...     print sentence
... 
Deux agressions en quelques jours,voilà ce qui a motivé hier matin le débrayage collège franco-britanique deLp

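What looked like corruption in the question is just the repr of a UTF-8 byte string: à (U+00E0) is stored as the two bytes 0xC3 0xA0. A minimal check of that encoding (shown here in Python 3 syntax, where str is unicode by default):

```python
# -*- coding: utf-8 -*-
s = 'voilà'  # in Python 3, str is already a unicode string

# Encoding to UTF-8 turns the single character à into the byte pair
# 0xC3 0xA0 -- exactly the "\xc3\xa0" seen in the question's output.
utf8_bytes = s.encode('utf8')
print(utf8_bytes)                      # b'voil\xc3\xa0'

# Decoding those bytes recovers the original string unchanged.
print(utf8_bytes.decode('utf8') == s)  # True
```

So NLTK never mangled the text; the question's output was a correctly UTF-8-encoded byte string whose repr merely looks scrambled.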
To get word tokens, use word_tokenize:

>>> from nltk import word_tokenize
>>> [word_tokenize(sent) for sent in frtokenizer.tokenize(s)]
[[u'Deux', u'agressions', u'en', u'quelques', u'jours', u',', u'voil\xe0', u'ce', u'qui', u'a', u'motiv\xe9', u'hier', u'matin', u'le', u'd\xe9brayage', u'coll\xe8ge', u'franco-britanique', u'deLp']]
>>> for sentence in frtokenizer.tokenize(s):
...     print word_tokenize(sentence)
... 
[u'Deux', u'agressions', u'en', u'quelques', u'jours', u',', u'voil\xe0', u'ce', u'qui', u'a', u'motiv\xe9', u'hier', u'matin', u'le', u'd\xe9brayage', u'coll\xe8ge', u'franco-britanique', u'deLp']

To get string output rather than a list of strings:

>>> for sentence in frtokenizer.tokenize(s):
...     print ' '.join(word_tokenize(sentence))
... 
Deux agressions en quelques jours , voilà ce qui a motivé hier matin le débrayage collège franco-britanique deLp

When reading a unicode (UTF-8) file in Python 2.7:

$ cat somefrench.txt 
Deux agressions en quelques jours,voilà ce qui a motivé hier matin le débrayage collège franco-britanique deLp

$ python

>>> import io
>>> with io.open('somefrench.txt', 'r', encoding='utf8') as fin:
...     for line in fin:
...         print line
... 
Deux agressions en quelques jours,voilà ce qui a motivé hier matin le débrayage collège franco-britanique deLp
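The io.open pattern above works unchanged in Python 3 as well; here is a self-contained round-trip sketch using a temporary file (the name somefrench.txt simply mirrors the example above):

```python
# -*- coding: utf-8 -*-
import io
import os
import tempfile

text = u'Deux agressions en quelques jours,voilà ce qui a motivé hier matin le débrayage'

# Write the sentence out as UTF-8 ...
path = os.path.join(tempfile.mkdtemp(), 'somefrench.txt')
with io.open(path, 'w', encoding='utf8') as fout:
    fout.write(text + u'\n')

# ... and read it back; io.open decodes to unicode for us, so the
# accented characters survive the round trip intact.
with io.open(path, 'r', encoding='utf8') as fin:
    for line in fin:
        print(line.strip())
```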

And to use word_tokenize together with the sentence tokenizer:

>>> import io
>>> import nltk
>>> from nltk import word_tokenize
>>> frtokenizer = nltk.data.load('tokenizers/punkt/french.pickle')
>>> with io.open('somefrench.txt', 'r', encoding='utf8') as fin:
...     for line in fin:  # 'fin' is the file handle, not the builtin 'file'
...         for sent in frtokenizer.tokenize(line): # sentences
...             print ' '.join(word_tokenize(sent))
... 
Deux agressions en quelques jours , voilà ce qui a motivé hier matin le débrayage collège franco-britanique deLp

For Portuguese:

>>> s = u'Seguranças dos aeroportos começam greve de cinco dias no sábado'
>>> pttokenizer = nltk.data.load('tokenizers/punkt/portuguese.pickle')
>>> for sent in pttokenizer.tokenize(s):
...     print ' '.join(word_tokenize(sent))
... 
Seguranças dos aeroportos começam greve de cinco dias no sábado