
UnsupportedOperation when merging Lucene90BlockTreeTermsWriter #14429


Open
benwtrent opened this issue Apr 2, 2025 · 3 comments

@benwtrent (Member) commented Apr 2, 2025

Description

Found this in the wild. I haven't been able to replicate :(

I don't even know what it means to hit this fst.outputs.merge branch, or under what conditions it is valid or invalid. Any pointers here would be useful.

We ran into a strange postings merge error in production.

The FST compiler reaches the "merge" line when merging some segments:

```java
if (lastInput.length() == input.length && prefixLenPlus1 == 1 + input.length) {
  // same input more than 1 time in a row, mapping to
  // multiple outputs
  lastNode.output = fst.outputs.merge(lastNode.output, output);
```
However, the outputs implementation provided by Lucene90BlockTreeTermsWriter is ByteSequenceOutputs, which does not override merge and thus throws an UnsupportedOperationException.
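To make the failure concrete, here is a simplified, self-contained model (not the actual Lucene source) of the relationship: the abstract Outputs base class throws from merge by default, and a ByteSequenceOutputs-style subclass that never expects duplicate inputs simply doesn't override it.

```java
// Toy model of org.apache.lucene.util.fst.Outputs and ByteSequenceOutputs.
// Class and method names mirror Lucene's, but this is an illustration only.
abstract class OutputsModel<T> {
    /** Concatenate/accumulate outputs; every Outputs impl provides this. */
    abstract T add(T prefix, T output);

    /**
     * Default behavior mirrors Outputs#merge: implementations that never
     * expect the same input twice do not override it, so hitting this path
     * means an invariant was violated upstream.
     */
    T merge(T first, T second) {
        throw new UnsupportedOperationException();
    }
}

class ByteSequenceOutputsModel extends OutputsModel<byte[]> {
    @Override
    byte[] add(byte[] prefix, byte[] output) {
        byte[] result = new byte[prefix.length + output.length];
        System.arraycopy(prefix, 0, result, 0, prefix.length);
        System.arraycopy(output, 0, result, prefix.length, output.length);
        return result;
    }
    // merge() intentionally not overridden: duplicate inputs are unexpected here.
}
```

So any code path that reaches merge on this outputs type fails unconditionally; the question is only how a duplicate input reached the FST compiler at all.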

```java
final ByteSequenceOutputs outputs = ByteSequenceOutputs.getSingleton();
final int fstVersion;
if (version >= Lucene90BlockTreeTermsReader.VERSION_CURRENT) {
  fstVersion = FST.VERSION_CURRENT;
} else {
  fstVersion = FST.VERSION_90;
}
final FSTCompiler<BytesRef> fstCompiler =
    new FSTCompiler.Builder<>(FST.INPUT_TYPE.BYTE1, outputs)
        // Disable suffixes sharing for block tree index because suffixes are mostly dropped
        // from the FST index and left in the term blocks.
        .suffixRAMLimitMB(0d)
        .dataOutput(getOnHeapReaderWriter(pageBits))
        .setVersion(fstVersion)
        .build();
```

Given this, it seems like it should be "impossible" to reach the "Outputs.merge" path when merging with the Lucene90BlockTreeTermsWriter, but somehow it did.

Any ideas on where I should look?

```
at org.apache.lucene.util.fst.Outputs.merge(Outputs.java:95) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.util.fst.FSTCompiler.add(FSTCompiler.java:936) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.codecs.lucene90.blocktree.Lucene90BlockTreeTermsWriter$PendingBlock.append(Lucene90BlockTreeTermsWriter.java:593) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.codecs.lucene90.blocktree.Lucene90BlockTreeTermsWriter$PendingBlock.compileIndex(Lucene90BlockTreeTermsWriter.java:562) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.codecs.lucene90.blocktree.Lucene90BlockTreeTermsWriter$TermsWriter.writeBlocks(Lucene90BlockTreeTermsWriter.java:776) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.codecs.lucene90.blocktree.Lucene90BlockTreeTermsWriter$TermsWriter.finish(Lucene90BlockTreeTermsWriter.java:1163) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.codecs.lucene90.blocktree.Lucene90BlockTreeTermsWriter.write(Lucene90BlockTreeTermsWriter.java:402) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:95) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:204) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:211) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.index.SegmentMerger.mergeWithLogging(SegmentMerger.java:300) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:139) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5293) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4761) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.index.IndexWriter$IndexWriterMergeSource.merge(IndexWriter.java:6582) ~[lucene-core-9.11.1.jar:?]
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:660) ~[lucene-core-9.11.1.jar:?]
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:134) ~[elasticsearch-8.15.0.jar:?]
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:721) ~[lucene-core-9.11.1.jar:?]
```

### Version and environment details

Lucene 9.11.1
@mikemccand (Member) commented:
Phew, this is a spooky exception!

I think it means that the same term was fed to the FST Builder twice in a row. The FST Builder can in general support this case: it means a single input maps to multiple outputs, and the Outputs impl is supposed to be able to combine those outputs into a set (internally). But you're right: in this context (BlockTree) the same term should never be added more than once, each term has a single output, and the Outputs impl does not support merging. It is indeed NOT supposed to happen!

BlockTree is confusing in how it builds up its blocks. It does it one sub-tree at a time, using intermediate FSTs to hold each sub-tree, and then regurgitating the terms from each subtree with FSTTermsEnum, adding them into a bigger FST Builder to combine multiple sub-trees into a single FST. It keeps doing this up and up the terms trie until it gets to empty string and then that FST is the terms index.

So .... somehow this regurgitation process added the same term twice in a row. This means either a given FSTTermsEnum returned the same term twice in a row, or, somehow a term was duplicated at the boundary (where one FSTTermsEnum ended from a sub-block, and next FSTTermsEnum began).
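The invariant being violated can be modeled in isolation. This is a toy sketch (not Lucene code) of the condition in the FSTCompiler.add() excerpt quoted above: a stream of terms fed to the builder must be strictly increasing, so the first term that repeats consecutively is exactly where outputs.merge() would be invoked, e.g. when a boundary term is emitted both at the end of one sub-block enumeration and at the start of the next.

```java
import java.util.List;

// Toy model of the duplicate-input check that trips the outputs.merge() branch.
class DuplicateDetector {
    /**
     * Returns the first term that appears twice in a row in the (sorted)
     * stream of terms fed to the FST builder, or null if there is none.
     */
    static String firstConsecutiveDuplicate(List<String> sortedTerms) {
        String last = null;
        for (String term : sortedTerms) {
            if (term.equals(last)) {
                return term; // this is where FSTCompiler would call outputs.merge()
            }
            last = term;
        }
        return null;
    }
}
```

For example, if one sub-block enumeration ends with "foo" and the next begins with "foo", the merged stream ("bar", "foo", "foo", "zap") contains a consecutive duplicate and the merge branch is reached.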

Do we know any fun details about the use case? Maybe an exotic/old JVM? Massive numbers of terms...? Or the terms are some crazy binary gene sequences or something?

@benwtrent (Member, Author) commented:
Thank you @mikemccand for some details!

> Do we know any fun details about the use case? Maybe an exotic/old JVM? Massive numbers of terms...? Or the terms are some crazy binary gene sequences or something?

I will see what I can find.

@benwtrent (Member, Author) commented:

@mikemccand OK, I gathered more info:

  • Modern OpenJDK (22.0.1)
  • Modern Linux

So other system stuff doesn't seem very exotic.

However, the data being ingested may contain various pieces of Turkish Unicode. Digging around the analyzers, I didn't find any special handling, so it's all using the StandardAnalyzer with no additional normalization.

I wonder if we are just hitting the dreaded Turkish "i" Unicode issue.
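For readers unfamiliar with that issue: in a Turkish locale, lowercasing ASCII 'I' yields dotless U+0131 ("ı") rather than 'i', and uppercasing 'i' yields dotted U+0130 ("İ"). A minimal illustration of how locale-sensitive case folding can produce different term bytes (whether this actually applies here depends on where, if anywhere, locale-sensitive casing happens in the ingest pipeline):

```java
import java.util.Locale;

// Illustrates the Turkish "i" casing pitfall: the same input string folds to
// different byte sequences depending on the locale used for lowercasing.
class TurkishI {
    /** Lowercase with the Turkish locale (I -> U+0131 "dotless i"). */
    static String lowerTr(String s) {
        return s.toLowerCase(Locale.forLanguageTag("tr"));
    }

    /** Lowercase with the locale-neutral root locale (I -> i). */
    static String lowerRoot(String s) {
        return s.toLowerCase(Locale.ROOT);
    }
}
```

If two components of a pipeline fold case under different locales, the "same" term can come out as two distinct byte sequences, or two distinct terms can collide into one, which is the kind of inconsistency that could plausibly confuse sorted-term invariants downstream.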
