rakesh - 2 months ago
Python Question

Recursion Error: Maximum Recursion depth exceeded

from __future__ import print_function
import os, codecs, nltk.stem

english_stemmer = nltk.stem.SnowballStemmer('english')
for root, dirs, files in os.walk("/Users/Documents/corpus/source-document/test1"):
    for file in files:
        if file.endswith(".txt"):
            posts = codecs.open(os.path.join(root, file), "r", "utf-8-sig")

from sklearn.feature_extraction.text import CountVectorizer

class StemmedCountVectorizer(CountVectorizer):
    def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self.build_analyzer())
        return lambda doc: (english_stemmer.stem(w) for w in analyzer(doc))

vectorizer = StemmedCountVectorizer(min_df=1, stop_words='english')
X_train = vectorizer.fit_transform(posts)
num_samples, num_features = X_train.shape
print("#samples: %d, #features: %d" % (num_samples, num_features))  # #samples: 5, #features: 25

When I run the above code over all the text files in the directory, it throws the following error:
RecursionError: maximum recursion depth exceeded.

I tried to resolve the problem with sys.setrecursionlimit, but in vain. When I set a large value such as 20000, the kernel crashes.


Your error is in analyzer = super(StemmedCountVectorizer, self.build_analyzer()). Here you call build_analyzer() on self before the super() call, so the overridden method calls itself, which causes an infinite recursive loop. Change it to analyzer = super(StemmedCountVectorizer, self).build_analyzer()
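
To see why the parentheses placement matters without needing sklearn or NLTK installed, here is a minimal sketch with two hypothetical toy classes (Base and Stemmed, standing in for CountVectorizer and StemmedCountVectorizer). super(Cls, self).build_analyzer() asks the parent class for the method; super(Cls, self.build_analyzer()) evaluates self.build_analyzer() first, re-entering the override and recursing forever.

```python
class Base:
    def build_analyzer(self):
        # Stand-in for CountVectorizer.build_analyzer:
        # returns a simple whitespace tokenizer.
        return str.split

class Stemmed(Base):
    def build_analyzer(self):
        # Correct: bind super() to self, THEN call the parent's method.
        # super(Stemmed, self.build_analyzer()) would call this very
        # method again before super() ever runs -> RecursionError.
        analyzer = super(Stemmed, self).build_analyzer()
        # Wrap the parent's analyzer (lowercasing stands in for stemming).
        return lambda doc: [w.lower() for w in analyzer(doc)]

tokens = Stemmed().build_analyzer()("Hello World")
print(tokens)  # ['hello', 'world']
```

In Python 3 you can simply write super().build_analyzer(), which avoids this class of mistake entirely.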