
Python: UnicodeDecodeError: 'utf8' codec can't decode byte

I'm reading a bunch of RTF files into Python strings.
On SOME texts, I get this error:

Traceback (most recent call last):
  File "", line 47, in <module>
    X = vectorizer.fit_transform(texts)
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\", line 716, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\", line 398, in fit_transform
    term_count_current = Counter(analyze(doc))
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\", line 313, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\", line 224, in decode
    doc = doc.decode(self.charset, self.charset_error)
  File "C:\Python27\lib\encodings\", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 462: invalid start byte
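
Byte 0x92 can never appear as the first byte of a UTF-8 sequence, but in the Windows-1252 codepage it is the right single quotation mark that Word inserts as a "smart quote"; that would explain why only some of the files fail. A quick check in Python 2.7, matching the traceback:

try:
    '\x92'.decode('utf8')
except UnicodeDecodeError as e:
    print e                             # 'utf8' codec can't decode byte 0x92 ... invalid start byte
print repr('\x92'.decode('cp1252'))     # u'\u2019', RIGHT SINGLE QUOTATION MARK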

I've tried:

  1. Copying and pasting the text of the files to new files

  2. Saving the RTF files as .txt files

  3. Opening the txt files in Notepad++, choosing 'Convert to UTF-8', and also setting the encoding to UTF-8 (the same conversion can be scripted; see the sketch below)

  4. Opening the files with Microsoft Word and saving them as new files

Nothing works. Any ideas?
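
If the source files really are Windows-1252 (which a stray 0x92 byte suggests), the conversion from step 3 can also be scripted. A minimal sketch under that assumption, reusing dir and location from the code below and writing to a made-up output path:

import io

with io.open(dir + location, "r", encoding="cp1252") as src:               # assumes cp1252 input
    text = src.read()
with io.open(dir + location + ".utf8.txt", "w", encoding="utf-8") as dst:  # hypothetical output file
    dst.write(text)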

It's probably not related, but here's the code in case you are wondering:

from pyth.plugins.rtf15.reader import Rtf15Reader        # assuming pyth here, to match PlaintextWriter below
from pyth.plugins.plaintext.writer import PlaintextWriter
from sklearn.feature_extraction.text import TfidfVectorizer

f = open(dir + location, "r")
doc = Rtf15Reader.read(f)                                 # assumption: pyth's RTF reader
t = PlaintextWriter.write(doc).getvalue()
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
X = vectorizer.fit_transform(texts)                       # texts is presumably the list of such strings, one per file


Answer:

As I said on the mailing list, it is probably easiest to use the charset_error option and set it to 'ignore'. If the file is actually UTF-16, you can also set the charset to 'utf-16' in the Vectorizer. See the docs.
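
A minimal sketch of that suggestion, using the parameter names from the traceback above (charset / charset_error; newer scikit-learn releases call them encoding / decode_error):

from sklearn.feature_extraction.text import TfidfVectorizer

# Skip bytes that cannot be decoded instead of raising UnicodeDecodeError.
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
                             stop_words='english',
                             charset_error='ignore')
# Or, if the files are actually UTF-16:
# vectorizer = TfidfVectorizer(charset='utf-16')
X = vectorizer.fit_transform(texts)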