StepTNT - 1 year ago
Java Question

"TokenStream contract violation: close() call missing" when calling addDocument

I'm using Lucene's features to build a simple way to match similar words within a text.

My idea is to have an Analyzer running on my text to provide a TokenStream, and for each token I run a FuzzyQuery to see if I have a match in my index. If not, I just index a new Document containing just the new unique word.
Here's what I'm getting, though:

Exception in thread "main" java.lang.IllegalStateException: TokenStream contract violation: close() call missing
    at org.apache.lucene.analysis.Tokenizer.setReader(...)
    at org.apache.lucene.analysis.Analyzer$TokenStreamComponents.setReader(...)
    at org.apache.lucene.analysis.standard.StandardAnalyzer$1.setReader(...)
    at org.apache.lucene.analysis.Analyzer.tokenStream(...)
    at org.apache.lucene.document.Field.tokenStream(...)
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(...)
    at org.apache.lucene.index.DefaultIndexingChain.processField(...)
    at org.apache.lucene.index.DefaultIndexingChain.processDocument(...)
    at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(...)
    at org.apache.lucene.index.DocumentsWriter.updateDocument(...)
    at org.apache.lucene.index.IndexWriter.updateDocument(...)
    at org.apache.lucene.index.IndexWriter.addDocument(...)
    at org.myPackage.MyClass.addToIndex(...)

Relevant code here:

// Setup tokenStream based on StandardAnalyzer
TokenStream tokenStream = analyzer.tokenStream(TEXT_FIELD_NAME, new StringReader(input));
tokenStream = new StopFilter(tokenStream, EnglishAnalyzer.getDefaultStopSet());
tokenStream = new ShingleFilter(tokenStream, 3);
tokenStream.reset();

// Iterate and process each token from the stream
while (tokenStream.incrementToken()) {
    CharTermAttribute charTerm = tokenStream.getAttribute(CharTermAttribute.class);
    processWord(charTerm.toString());
}

// Processing a word means looking for a similar one inside the index and, if not found, adding this one to the index
void processWord(String word) throws IOException {
    if (DirectoryReader.indexExists(index)) {
        reader = DirectoryReader.open(index);
        IndexSearcher searcher = new IndexSearcher(reader);
        Query query = new FuzzyQuery(new Term(TEXT_FIELD_NAME, word));
        TopDocs searchResults = searcher.search(query, 1);
        if (searchResults.totalHits > 0) {
            Document foundDocument = searcher.doc(searchResults.scoreDocs[0].doc);
            // ... work with the matching document ...
        } else {
            addWordToIndex(word);
        }
    } else {
        addWordToIndex(word);
    }
}

// Create a new Document to index the provided word
void addWordToIndex(String word) throws IOException {
    Document newDocument = new Document();
    newDocument.add(new TextField(TEXT_FIELD_NAME, new StringReader(word)));
    writer.addDocument(newDocument); // this is the call that throws
}

The exception seems to say that I should close the TokenStream before adding things to the index, but that doesn't really make sense to me, because how are the index and the TokenStream related? I mean, the index just receives a Document containing a String; the fact that the String comes from a TokenStream should be irrelevant.

Any hint on how to solve this?

Answer

The problem is in your reuse of the same analyzer that the IndexWriter is trying to use. You have a TokenStream open from that analyzer, and then you try to index a document. That document needs to be analyzed, but the analyzer finds its old TokenStream is still open, and throws an exception.
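For context, the contract the exception names is the documented TokenStream workflow: obtain the stream, reset() it, consume it with incrementToken(), call end(), then close() it. A minimal sketch of a well-behaved consumer (the field name and input text are placeholders, and this assumes Lucene 5.x-style constructors):

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

Analyzer analyzer = new StandardAnalyzer();
try (TokenStream ts = analyzer.tokenStream("field", new StringReader("some text"))) {
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();                   // mandatory before the first incrementToken()
    while (ts.incrementToken()) {
        System.out.println(term.toString());
    }
    ts.end();                     // reach end-of-stream state
}                                 // try-with-resources calls close(), freeing the stream for reuse

Your code calls reset() and incrementToken() but never end() or close(), so the analyzer's reusable stream is left open.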

To fix it, you could create a new, separate analyzer for processing and testing the string, instead of using the one that IndexWriter is using.
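A minimal sketch of that fix (imports as above, plus org.apache.lucene.index.IndexWriter and IndexWriterConfig; the names indexingAnalyzer and lookupAnalyzer are placeholders, not from the original code):

// The IndexWriter keeps its own analyzer for indexing documents...
Analyzer indexingAnalyzer = new StandardAnalyzer();
IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(indexingAnalyzer));

// ...while a second, separate analyzer tokenizes the input text, so the
// writer's analyzer never sees a stale, still-open TokenStream.
Analyzer lookupAnalyzer = new StandardAnalyzer();
try (TokenStream ts = lookupAnalyzer.tokenStream(TEXT_FIELD_NAME, new StringReader(input))) {
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
        processWord(term.toString()); // can now call writer.addDocument(...) safely
    }
    ts.end();
}

Closing the lookup stream via try-with-resources also keeps lookupAnalyzer reusable for the next input.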
