
Add a HNSW collector that exits early when nearest neighbor queue saturates #14094

Merged · 51 commits · Apr 2, 2025

Conversation

@tteofili (Contributor) commented Jan 2, 2025

This introduces an HnswKnnCollector interface, extending KnnCollector for HNSW, to make it possible to hook into HNSW execution for optimizations.
It then adds a new collector that uses a saturation-based threshold to dynamically halt HNSW graph exploration, exiting early when exploring further candidates is unlikely to add new neighbors.
The new collector records the number of neighbors added while exploring each candidate (an HNSW node) and compares it with the number added while exploring the previous candidate. When the rate of added neighbors plateaus for a number of consecutive iterations, graph exploration stops (earlyTerminate returns true).
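The plateau check described above can be sketched roughly like this. This is an illustrative standalone sketch, not the actual Lucene implementation; the class and parameter names (`SaturationTracker`, `saturationThreshold`, `patience`) are hypothetical:

```java
// Hypothetical sketch of saturation-based early termination: track how many
// neighbors were added per explored candidate, and stop once the rate of
// additions has plateaued for `patience` consecutive candidates.
class SaturationTracker {
  private final double saturationThreshold; // max relative change still considered a plateau
  private final int patience;               // consecutive plateaued steps before stopping
  private int previousAdded = -1;           // neighbors added at the previous candidate
  private int saturatedCount = 0;           // consecutive plateaued candidates so far

  SaturationTracker(double saturationThreshold, int patience) {
    this.saturationThreshold = saturationThreshold;
    this.patience = patience;
  }

  /** Called once per explored HNSW candidate with the neighbor count added so far. */
  boolean shouldStop(int addedNeighbors) {
    if (previousAdded >= 0) {
      int delta = Math.abs(addedNeighbors - previousAdded);
      if (delta <= saturationThreshold * Math.max(1, previousAdded)) {
        saturatedCount++; // no meaningful progress: count toward the patience budget
      } else {
        saturatedCount = 0; // progress resumed: reset the plateau counter
      }
    }
    previousAdded = addedNeighbors;
    return saturatedCount >= patience;
  }
}
```

With `saturationThreshold = 0` and `patience = 3`, exploration stops after three consecutive candidates that add no new neighbors.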

@tteofili (Contributor, Author) commented:

[Screenshot 2025-01-15 at 14:40:16: plot of collected nearest neighbors saturating over graph exploration]

This sample graph (from Cohere-768) shows how the collection of nearest neighbors saturates, so it makes sense to stop visiting the graph "earlier", e.g., when the saturation counter exceeds a given threshold.

Comment on lines 20 to 24
public interface HnswKnnCollector extends KnnCollector {

/** Indicates exploration of the next HNSW candidate graph node. */
void nextCandidate();
}
@benwtrent (Member) commented:

I think this kind of collector is OK. But it makes the most sense to me as a delegate collector: an abstract collector akin to KnnCollector.Delegate.

Then, I also think that OrdinalTranslatingKnnCollector should inherit directly from HnswKnnCollector, always assuming that the passed-in collector is an HnswKnnCollector.

Note, the default behavior for HnswKnnCollector#nextCandidate can simply be a no-op, allowing for overriding.

This might require a new HnswGraphSearcher#search interface to keep the old collector behavior, but it should be simple to add a new one that accepts an HnswKnnCollector and delegates to it via new HnswKnnCollector(KnnCollector delegate).
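The delegate pattern suggested above might look roughly like this. The real KnnCollector interface in Lucene has more methods (k(), visitedCount(), topDocs(), ...); this trimmed version is illustrative only:

```java
// Trimmed, illustrative stand-in for Lucene's KnnCollector interface.
interface KnnCollector {
  boolean earlyTerminated();
  boolean collect(int docId, float similarity);
}

// Abstract delegating collector: forwards all calls to the wrapped collector,
// while nextCandidate() defaults to a no-op so subclasses can opt in.
abstract class HnswKnnCollector implements KnnCollector {
  protected final KnnCollector delegate;

  protected HnswKnnCollector(KnnCollector delegate) {
    this.delegate = delegate;
  }

  @Override
  public boolean earlyTerminated() {
    return delegate.earlyTerminated();
  }

  @Override
  public boolean collect(int docId, float similarity) {
    return delegate.collect(docId, similarity);
  }

  /** Hook called when the searcher moves to the next HNSW candidate node; no-op by default. */
  public void nextCandidate() {}
}
```

Existing collectors can then be wrapped unchanged, and only saturation-aware subclasses need to override nextCandidate().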

@benwtrent (Member) commented:

I adjusted my refactoring for the seeded queries similarly. It seems nicer IMO: #14170

@tteofili (Contributor, Author) commented:

thanks Ben. I'll incorporate your suggestions once #14170 is in.

@tteofili (Contributor, Author) commented:

made HnswKnnCollector a KnnCollector.Decorator in c6dbf7e

@tteofili (Contributor, Author) commented Mar 20, 2025

Additional experiments with different quantization levels and filtering:

No filtering

Baseline

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.985        4.620  200000   100      50       64        250         no    106.46       1878.64             3          600.08       585.938      585.938       HNSW
 0.899        3.657  200000   100      50       64        250     7 bits     67.74       2952.47             5          746.34       733.185      147.247       HNSW
 0.585        2.328  200000   100      50       64        250     4 bits     46.86       4268.03             3          675.33       659.943       74.005       HNSW
 0.983        9.212  500000   100      50       64        250         no    235.68       2121.56             8         1501.44      1464.844     1464.844       HNSW
 0.900        7.562  500000   100      50       64        250     7 bits    165.99       3012.30             9         1867.29      1832.962      368.118       HNSW
 0.580        4.934  500000   100      50       64        250     4 bits    130.65       3826.96             8         1689.29      1649.857      185.013       HNSW

Candidate

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  selectivity  vec_disk(MB)  vec_RAM(MB)  indexType
 0.980        3.744  200000   100      50       64        250         no    10690    106.82       1872.29             3          600.10         1.00       585.938      585.938       HNSW
 0.896        3.473  200000   100      50       64        250     7 bits    11878     68.83       2905.54             5          746.39         1.00       733.185      147.247       HNSW
 0.585        2.032  200000   100      50       64        250     4 bits    13279     51.32       3897.12             3          675.32         1.00       659.943       74.005       HNSW
 0.982        8.549  500000   100      50       64        250         no    23079    248.29       2013.81             8         1501.32         1.00      1464.844     1464.844       HNSW
 0.898        6.733  500000   100      50       64        250     7 bits    23629    167.17       2991.02             9         1867.31         1.00      1832.962      368.118       HNSW
 0.581        3.776  500000   100      50       64        250     4 bits    21179    152.43       3280.24             5         1690.38         1.00      1649.857      185.013       HNSW

Filtering

Baseline

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  selectivity  vec_disk(MB)  vec_RAM(MB)  indexType
 1.000        0.642  200000   100      50       64        250         no     1965    109.81       1821.26             3          600.16         0.01       585.938      585.938       HNSW
 0.964        4.947  200000   100      50       64        250         no     9504    110.91       1803.33             3          600.11         0.10       585.938      585.938       HNSW
 0.983        8.417  200000   100      50       64        250         no    22193    103.13       1939.28             3          600.09         0.50       585.938      585.938       HNSW
 0.918        0.762  200000   100      50       64        250     7 bits     1981     64.33       3108.82             5          746.33         0.01       733.185      147.247       HNSW
 0.892        4.310  200000   100      50       64        250     7 bits    10302     66.23       3019.87             5          746.34         0.10       733.185      147.247       HNSW
 0.898        6.900  200000   100      50       64        250     7 bits    23394     69.09       2894.82             4          746.51         0.50       733.185      147.247       HNSW
 0.660        1.137  200000   100      50       64        250     4 bits     1695     50.01       3999.44             3          675.40         0.01       659.943       74.005       HNSW
 0.619        2.852  200000   100      50       64        250     4 bits    11021     49.88       4010.03             3          675.31         0.10       659.943       74.005       HNSW
 0.592        4.429  200000   100      50       64        250     4 bits    27121     48.72       4104.75             3          675.30         0.50       659.943       74.005       HNSW
 1.000        2.371  500000   100      50       64        250         no     5017    244.18       2047.64             8         1501.36         0.01      1464.844     1464.844       HNSW
 0.968       11.976  500000   100      50       64        250         no    21270    266.14       1878.73             8         1501.19         0.10      1464.844     1464.844       HNSW
 0.987       17.191  500000   100      50       64        250         no    44939    239.83       2084.78             8         1501.26         0.50      1464.844     1464.844       HNSW
 0.913        2.024  500000   100      50       64        250     7 bits     5075    166.55       3002.17             9         1867.19         0.01      1832.962      368.118       HNSW
 0.891       10.079  500000   100      50       64        250     7 bits    21671    168.88       2960.73             9         1867.41         0.10      1832.962      368.118       HNSW
 0.899       13.733  500000   100      50       64        250     7 bits    47517    168.22       2972.25             9         1867.22         0.50      1832.962      368.118       HNSW
 0.660        1.183  500000   100      50       64        250     4 bits     5085    153.22       3263.30             5         1690.35         0.01      1649.857      185.013       HNSW
 0.598        8.365  500000   100      50       64        250     4 bits    23514    137.45       3637.69             8         1689.26         0.10      1649.857      185.013       HNSW
 0.588        9.584  500000   100      50       64        250     4 bits    48507    137.44       3638.00             8         1689.32         0.50      1649.857      185.013       HNSW

Candidate

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  selectivity  vec_disk(MB)  vec_RAM(MB)  indexType
 1.000        0.618  200000   100      50       64        250         no     1685    105.74       1891.47             3          600.11         0.01       585.938      585.938       HNSW
 0.955        4.211  200000   100      50       64        250         no     8446    104.30       1917.60             3          600.09         0.10       585.938      585.938       HNSW
 0.970        6.499  200000   100      50       64        250         no    17121    106.95       1869.98             3          600.11         0.50       585.938      585.938       HNSW
 0.918        0.813  200000   100      50       64        250     7 bits     2047     69.00       2898.68             5          746.34         0.01       733.185      147.247       HNSW
 0.883        4.271  200000   100      50       64        250     7 bits     8909     70.60       2832.98             4          746.46         0.10       733.185      147.247       HNSW
 0.893        6.104  200000   100      50       64        250     7 bits    21460     69.16       2891.72             5          746.39         0.50       733.185      147.247       HNSW
 0.684        0.763  200000   100      50       64        250     4 bits     1969     49.21       4064.54             3          675.34         0.01       659.943       74.005       HNSW
 0.613        2.752  200000   100      50       64        250     4 bits     9832     50.25       3979.78             3          675.31         0.10       659.943       74.005       HNSW
 0.592        3.430  200000   100      50       64        250     4 bits    20823     48.60       4115.06             3          675.33         0.50       659.943       74.005       HNSW
 1.000        2.346  500000   100      50       64        250         no     4996    243.49       2053.51             8         1501.29         0.01      1464.844     1464.844       HNSW
 0.964       11.287  500000   100      50       64        250         no    19991    243.30       2055.08             8         1501.34         0.10      1464.844     1464.844       HNSW
 0.984       15.180  500000   100      50       64        250         no    39049    245.65       2035.38             8         1501.41         0.50      1464.844     1464.844       HNSW
 0.894        2.064  500000   100      50       64        250     7 bits     4615    175.74       2845.05             9         1867.25         0.01      1832.962      368.118       HNSW
 0.889        9.321  500000   100      50       64        250     7 bits    20292    176.89       2826.68             9         1867.15         0.10      1832.962      368.118       HNSW
 0.898       13.142  500000   100      50       64        250     7 bits    43073    167.55       2984.20             9         1867.34         0.50      1832.962      368.118       HNSW
 0.654        1.819  500000   100      50       64        250     4 bits     5024    151.40       3302.55             5         1690.48         0.01      1649.857      185.013       HNSW
 0.598        5.857  500000   100      50       64        250     4 bits    19382    155.89       3207.37             5         1690.41         0.10      1649.857      185.013       HNSW
 0.588        5.437  500000   100      50       64        250     4 bits    29505    150.84       3314.77             5         1690.41         0.50      1649.857      185.013       HNSW

The results are mostly good. I might see if I can improve the behavior with very selective filters (the 0.01 selectivity case).

@benwtrent (Member) left a comment:

Thank you for running those benchmarks. I think all the numbers look good.

My final concerns/questions are around the API.

Two ideas:

  • can we make the API more general? Seems like it could be generally useful. Maybe we kick the can here...
  • If we cannot make the API more general, or don't see the value in ever doing that, can we utilize a search strategy instead?

}

@Override
public void nextCandidate() {
@benwtrent (Member) commented:

@tteofili what do you think of making this more general? I think having a "nextCandidate" or "nextBlockOfVectors" is generally useful, and might be applicable to all types of kNN indices.

For example:

  • Flat, you just get called once, indicating you are searching ALL vectors
  • HNSW, you get called for each NSW (or in the case of filtered search, extended NSW)
  • IVF, you get called for each posting list
  • Vamana, you get called for each node before calling the neighbors

Do you think we can make this API general?

Maybe not, I am not sure.
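The generalization sketched in the bullets above might look something like this. This is a hypothetical interface for illustration, not an actual Lucene API:

```java
// Hypothetical generalization of the per-candidate hook: the index structure
// invokes nextVectorsBlock() before scoring each "block" of vectors, whatever
// a block means for that structure (all vectors for flat, an NSW neighborhood
// for HNSW, a posting list for IVF, a node's neighbors for Vamana).
interface BlockAwareCollector {
  void nextVectorsBlock();
}

// Example implementation that just counts blocks, e.g. to observe how much
// of the index a search actually touched.
class BlockCountingCollector implements BlockAwareCollector {
  private int blocks = 0;

  @Override
  public void nextVectorsBlock() {
    blocks++;
  }

  public int blockCount() {
    return blocks;
  }
}
```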

@tteofili (Contributor, Author) commented:

I really like this idea Ben, I'll see if I can make up something reasonable for that ;)

*
* @lucene.experimental
*/
public abstract class HnswKnnCollector extends KnnCollector.Decorator {
@benwtrent (Member) commented:

Ah, it is a little frustrating, as we already have an "HNSWStrategy" and now we have an "HNSWCollector".

Could we utilize an HNSWStrategy? Or make nextCandidate a more general API?

My thought on the strategy would be for the graph searcher to indicate, through the strategy object, when the next group of vectors will be searched; the strategy would hold a reference to the collector, to which it can forward the request.

Of course, this still requires a new HnswQueueSaturationCollector, but it won't require these new base classes.

@tteofili (Contributor, Author) commented Apr 1, 2025:

I've spent some time trying to refactor this and extract a wider nextVectorsBlock API, but it sounds like conflating too much into this PR, so I'd opt to "only" get rid of the HnswKnnCollector class and rely on the strategy.

@tteofili (Contributor, Author) commented Apr 1, 2025:

as a first step I've dropped HnswKnnCollector in favor of adding the nextVectorsBlock API to KnnCollector.Decorator.

@tteofili (Contributor, Author) commented Apr 1, 2025

@benwtrent I've reworked the design to expose KnnSearchStrategy#nextVectorsBlock; PatienceKnnVectorQuery leverages a Patience strategy that calls HnswQueueSaturationCollector#nextCandidate (which is no longer a generic API) on nextVectorsBlock.
Hopefully this is a bit cleaner now.
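The wiring described above might be sketched roughly like this. These are simplified stand-in shapes; the actual KnnSearchStrategy, HnswQueueSaturationCollector, and PatienceKnnVectorQuery classes in Lucene differ:

```java
// Simplified sketch of the final design: the strategy's nextVectorsBlock()
// forwards to the saturation collector's (now HNSW-specific) nextCandidate().
class HnswQueueSaturationCollector {
  private int candidates = 0;

  /** Invoked once per explored HNSW candidate node. */
  void nextCandidate() {
    candidates++;
    // the real collector compares neighbors added against the previous
    // candidate here and bumps a saturation counter when progress plateaus
  }

  int exploredCandidates() {
    return candidates;
  }
}

// "Patience" strategy: holds a reference to the collector and forwards the
// per-block notification it receives from the graph searcher.
class PatienceStrategy {
  private final HnswQueueSaturationCollector collector;

  PatienceStrategy(HnswQueueSaturationCollector collector) {
    this.collector = collector;
  }

  void nextVectorsBlock() {
    collector.nextCandidate();
  }
}
```

This keeps nextCandidate off the generic collector API: only the strategy knows the collector is saturation-aware.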

@benwtrent (Member) left a comment:

I think this is OK for now. We really need to clean up these internal APIs, they are getting out of hand :)

@@ -139,6 +139,8 @@ New Features

* GITHUB#14412: Allow skip cache factor to be updated dynamically. (Sagar Upadhyaya)

* GITHUB#14094: New KNN query that early terminates when HNSW nearest neighbor queue saturates. (Tommaso Teofili)
Member commented:

10.2 is cut, so this will now be a 10.3 thing :/

Member commented:

Unless this is being added to 10.2

@tteofili tteofili merged commit 525bf34 into apache:main Apr 2, 2025
7 checks passed
tteofili added a commit that referenced this pull request Apr 2, 2025
…urates (#14094)

* Add a HNSW early termination based on nearest neighbor queue saturation

Co-authored-by: Benjamin Trent <[email protected]>
(cherry picked from commit 525bf34)
tteofili added a commit that referenced this pull request Apr 2, 2025
…urates (#14094)

* Add a HNSW early termination based on nearest neighbor queue saturation

Co-authored-by: Benjamin Trent <[email protected]>
(cherry picked from commit 525bf34)
4 participants