Split segment by search type #2273
base: main
Conversation
  if (isShardLevelRescoringEnabled == true) {
      ResultUtil.reduceToTopK(perLeafResults, firstPassK);
  }

  StopWatch stopWatch = new StopWatch().start();
- perLeafResults = doRescore(indexSearcher, leafReaderContexts, knnWeight, perLeafResults, finalK);
+ perLeafResults = doRescore(indexSearcher, knnWeight, perLeafResults, finalK);
Previously, we did exact search on finalK. After this change, we still do exact search on finalK. Could you tell me how this will improve the latency?
doSearch can call either approx search or exact search based on conditions such as whether engine files exist, or whether the number of docs after filtering is less than k. In those cases, we quantize the query vector and every vector from the segments, and then perform the distance computation using Hamming distance for firstPassK. With this approach, we only call doSearch for those segments that we know will always use approx search; for the other segments we call exact search without quantization, with finalK. The optimization is at https://github.com/opensearch-project/k-NN/pull/2273/files#diff-9cfe412357ba56b3ef216427d491fc653535686a760e8ba19ea1aa00fc0e0338R68-R78
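The split described above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: `Segment`, `willUseApproxSearch`, and `partition` are stand-in names, and the real decision in `KNNWeight` involves more conditions than the two modeled here.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: partition segments by whether approximate search will run.
public class SegmentPartitioner {

    // Minimal stand-in for the per-segment facts the decision needs.
    public static class Segment {
        final boolean hasEngineFiles;   // native engine (e.g. faiss) files present?
        final int liveDocsAfterFilter;  // docs surviving the filter

        public Segment(boolean hasEngineFiles, int liveDocsAfterFilter) {
            this.hasEngineFiles = hasEngineFiles;
            this.liveDocsAfterFilter = liveDocsAfterFilter;
        }
    }

    // Approx search only runs when engine files exist and the filtered doc
    // count is at least k; otherwise exact search happens anyway.
    public static boolean willUseApproxSearch(Segment s, int k) {
        return s.hasEngineFiles && s.liveDocsAfterFilter >= k;
    }

    // Split segments into [approx, exact] buckets: the approx bucket gets the
    // oversampled firstPassK pass plus rescore, the exact bucket searches
    // full-precision vectors with finalK directly.
    public static List<List<Segment>> partition(List<Segment> segments, int k) {
        List<Segment> approx = new ArrayList<>();
        List<Segment> exact = new ArrayList<>();
        for (Segment s : segments) {
            (willUseApproxSearch(s, k) ? approx : exact).add(s);
        }
        return List.of(approx, exact);
    }
}
```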
Are you assuming that an exact search on full precision vectors will be faster than an exact search with quantized vectors due to the slower quantization process? It would be interesting to see the benchmark results for this.
If that’s the case, an alternative could be to retrieve quantized values directly from the Faiss file instead of performing on-the-fly quantization.
Yes, exact search on full precision for k is cheaper than exact search on quantized vectors for firstPassK plus rescoring the matched docs on full precision. The linked GitHub issue actually shows how performance degraded 10x when there are segments with no faiss engine files. In my POC, I saw improvements, but recall was poor because the order of results was used as the link to the leaf reader context. I am rerunning experiments with my change to collect latency and recall metrics.
There is one case where we run exact search: when the number of returned results is less than k. Are we going to handle that case as well?
I believe this happens with a filter; if so, yes, it is already taken care of.
No. It happens regardless of whether there is a filter or not. https://github.com/opensearch-project/k-NN/blob/main/src/main/java/org/opensearch/knn/index/query/KNNWeight.java#L149
No. It can happen either when there are no engine files, or when the number of matched documents after filtering is less than k. We only call doSearch if we know that it will use the approx search API. For the other segments we directly call exact search; that is exactly what this PR does: https://github.com/opensearch-project/k-NN/pull/2273/files#diff-9cfe412357ba56b3ef216427d491fc653535686a760e8ba19ea1aa00fc0e0338R72-R75
We map the order of results to the order of segments, and finally rely on that order to build top docs. Refactor the method to use Map.Entry to associate each LeafReaderContext with the results from that leaf. This is required when we want to split segments by approx search vs. exact search, to avoid rescoring exact-search results twice. Signed-off-by: Vijayan Balasubramanian <[email protected]>
For exact search, it is not required to perform quantization during rescore with oversampling. However, to avoid normalization mismatches between segments using approx search and segments using exact search, we first identify the segments that need approx search and perform oversampling on them; at the end, after rescore, we add the scores from the segments that performed exact search. Signed-off-by: Vijayan Balasubramanian <[email protected]>
Should we skip quantization on indexing if we are not using it here, then?
+1. I think it is a valid point. We should do that too. But @VijayanB, there were a couple more ideas we were discussing for how to fix this issue. Did you put some thought into those, i.e. why should we split the segments by search type?
/**
 * For a given {@link LeafReaderContext}, this API returns whether KNNWeight will always
 * perform exact search. This decision is based on two properties: 1) if there are no
 * native engine files in the segment, exact search is always performed; 2) if the
 * number of docs after filtering is less than 'k'
This javadoc is incorrect. We have a bit more logic around this, so let's fix the javadoc.
@@ -30,14 +31,15 @@ public final class ResultUtil {
   * @param perLeafResults Results from the list
   * @param k the number of results across all leaf results to return
   */
- public static void reduceToTopK(List<Map<Integer, Float>> perLeafResults, int k) {
+ public static void reduceToTopK(List<Map.Entry<LeafReaderContext, Map<Integer, Float>>> perLeafResults, int k) {
We should instead have a List<FirstPassResults>, where

class FirstPassResults {
    private LeafReaderContext leafReaderContext;
    private Map<Integer, Float> docToScoreMap;
}

This will help in the future to abstract more details related to searching a segment. Feel free to pick a better name for the classes.
Would PerLeafResult work as a class name? It feels more generic and versatile compared to including FirstPass. Perhaps firstPassResults could be used as a variable name instead?
+1
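A minimal sketch of the suggested PerLeafResult wrapper might look like the following. This is illustrative only: the leaf field uses `Object` as a stand-in for Lucene's `LeafReaderContext`, and the `reduceToTopK` body is a simplified reimplementation of ResultUtil's contract, not the project's actual code.

```java
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Hypothetical wrapper pairing a leaf with its doc->score map.
public class PerLeafResult {
    private final Object leafReaderContext;        // stand-in for LeafReaderContext
    private final Map<Integer, Float> docToScoreMap;

    public PerLeafResult(Object leafReaderContext, Map<Integer, Float> docToScoreMap) {
        this.leafReaderContext = leafReaderContext;
        this.docToScoreMap = docToScoreMap;
    }

    public Object getLeafReaderContext() { return leafReaderContext; }
    public Map<Integer, Float> getDocToScoreMap() { return docToScoreMap; }

    // Keep only the global top-k scores across all leaves, mutating each leaf's
    // map in place (ties at the cutoff score may keep slightly more than k).
    public static void reduceToTopK(List<PerLeafResult> perLeafResults, int k) {
        PriorityQueue<Float> topK = new PriorityQueue<>(); // min-heap holding the best k scores
        for (PerLeafResult r : perLeafResults) {
            for (float score : r.getDocToScoreMap().values()) {
                topK.offer(score);
                if (topK.size() > k) topK.poll();
            }
        }
        if (topK.isEmpty()) return;
        float cutoff = topK.peek();
        for (PerLeafResult r : perLeafResults) {
            r.getDocToScoreMap().values().removeIf(score -> score < cutoff);
        }
    }
}
```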
 * @throws IOException
 */
public boolean isExactSearchPreferred(LeafReaderContext context) throws IOException {
    final BitSet filterBitSet = getFilteredDocsBitSet(context);
For each ANN leaf, getting the filter bitset now happens twice: once for this check, and then again to get the actual filter bitset.
In the worst case, this goes through filterWeight.scorer twice and then creates a bitset, which involves a linear loop.
How confident are we that this won't impact latencies for filtering cases?
Can we avoid this duplication? One option is to do only the engine-files-empty check here, and then pass the bitset into searchLeaf and into the exact search context.
Let me know if I am missing something.
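One way to address the duplication the comment raises is to memoize the bitset per leaf so both call sites share one computation. This is a hypothetical sketch, not the PR's code: the cache is keyed on an `int` leaf ordinal instead of a real `LeafReaderContext`, and `computeBitSet` stands in for the real scan over `filterWeight.scorer(...)`.

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Illustrative per-leaf filter-bitset cache.
public class FilterCache {
    private final Map<Integer, BitSet> cache = new HashMap<>();

    // Expensive in the real code: walks the filter scorer's iterator once.
    private BitSet computeBitSet(int leafOrd, int maxDoc) {
        BitSet bits = new BitSet(maxDoc);
        // ... in the real implementation, populate from filterWeight.scorer(ctx)
        return bits;
    }

    // Compute at most once per leaf; both the exact-search-preferred check
    // and the actual search would call this instead of rebuilding the bitset.
    public BitSet getFilteredDocsBitSet(int leafOrd, int maxDoc) {
        return cache.computeIfAbsent(leafOrd, ord -> computeBitSet(ord, maxDoc));
    }

    // Exact search when there are no engine files, or when the filtered
    // cardinality is below k (mirroring the two conditions discussed above).
    public boolean isExactSearchPreferred(int leafOrd, int maxDoc, boolean hasEngineFiles, int k) {
        if (!hasEngineFiles) {
            return true;
        }
        return getFilteredDocsBitSet(leafOrd, maxDoc).cardinality() < k;
    }
}
```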
        });
    }
    return indexSearcher.getTaskExecutor().invokeAll(rescoreTasks);
}

private List<Map.Entry<LeafReaderContext, Map<Integer, Float>>> doExactSearch(
doRescore and this method are more or less similar; is there any possibility we can reuse something here?
Another option could be to read the quantized values directly from the native engine file, as discussed in #2266. This approach would also address cases where the search results are fewer than k. Or, we could store the quantized values in a Lucene segment file and read from it directly, so that we can avoid quantization during exact search.
Description
For exact search, it is not required to perform
quantization during rescore with oversampling.
However, to avoid normalization mismatches between segments using
approx search and segments using exact search, we first identify
the segments that need approx search and perform oversampling on them;
at the end, after rescore, we add the scores from the segments that
performed exact search.
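The control flow described above could be sketched as follows. This is an illustrative outline only, with the search and rescore steps abstracted as function parameters; none of these names come from the PR.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

// Hypothetical outline of the split flow: oversampled approx pass + rescore
// for approx-capable segments, direct exact search (no quantization) for the rest.
public class SplitSearchFlow {

    public static <S> List<Map<Integer, Float>> search(
            List<S> approxSegments,
            List<S> exactSegments,
            int firstPassK,
            int finalK,
            BiFunction<S, Integer, Map<Integer, Float>> approxSearch,  // (segment, k) -> doc->score
            BiFunction<S, Integer, Map<Integer, Float>> exactSearch,   // full precision, no quantization
            BiFunction<Map<Integer, Float>, Integer, Map<Integer, Float>> rescore) {

        List<Map<Integer, Float>> results = new ArrayList<>();
        // 1) approx pass with the oversampled firstPassK, then rescore down to finalK
        for (S seg : approxSegments) {
            Map<Integer, Float> firstPass = approxSearch.apply(seg, firstPassK);
            results.add(rescore.apply(firstPass, finalK));
        }
        // 2) exact segments skip quantization and rescoring entirely
        for (S seg : exactSegments) {
            results.add(exactSearch.apply(seg, finalK));
        }
        return results;
    }
}
```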
Related Issues
#2215
Check List
Commits are signed per the DCO using --signoff.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.