From 2f9f403a6cfdaac3de12469d2c60c777aee12a27 Mon Sep 17 00:00:00 2001
From: wrigleyDan
Date: Thu, 12 Dec 2024 17:45:30 +0100
Subject: [PATCH 01/18] Adding draft for optimizing hybrid search blog post.

Signed-off-by: wrigleyDan
---
 .../2024-12-xx-hybrid-search-optimization.md  | 343 ++++++++++++++++++
 .../1_search_config_comparison.png            | Bin 0 -> 39410 bytes
 ...andom_forest_best_feature_combinations.png | Bin 0 -> 53752 bytes
 ...linear_model_best_feature_combinations.png | Bin 0 -> 53216 bytes
 4 files changed, 343 insertions(+)
 create mode 100644 _posts/2024-12-xx-hybrid-search-optimization.md
 create mode 100644 assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/1_search_config_comparison.png
 create mode 100644 assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/2_random_forest_best_feature_combinations.png
 create mode 100644 assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/3_linear_model_best_feature_combinations.png

diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md
new file mode 100644
index 000000000..5cdfa7988
--- /dev/null
+++ b/_posts/2024-12-xx-hybrid-search-optimization.md
@@ -0,0 +1,343 @@
---
layout: post
title: "Optimizing Hybrid Search in OpenSearch"
authors:
  - dwrigley
date: 2024-12-xx
categories:
  - technical-posts
  - community
meta_keywords: hybrid query, hybrid search, neural query, keyword search, search relevancy, search result quality optimization
meta_description: Tackle the optimization of hybrid search in a systematic way and train models that dynamically predict the best way to run hybrid search in your search application.
---

# Introduction

[Hybrid search combines keyword and neural search to improve search relevance](https://opensearch.org/docs/latest/search-plugins/hybrid-search), and this combination shows promising results across industries and [in benchmarks](https://opensearch.org/blog/semantic-science-benchmarks/).

As of [OpenSearch 2.18](https://opensearch.org/docs/latest/search-plugins/hybrid-search/), hybrid search linearly combines keyword search (for example, match queries) with neural search (which transforms queries into vector embeddings by using machine learning models). This combination is configured in a search pipeline. The pipeline defines the post-processing of the keyword and neural result sets: the scores of each are normalized and then combined with one of the three currently available techniques (arithmetic, harmonic, or geometric mean).

This search pipeline configuration lets OpenSearch users define how to normalize the scores and how to weigh the two result sets.

# Finding the right hybrid search configuration is hard

For an OpenSearch user this leads to the ultimate question: which parameter set is the best for me and my application(s)? Or, more concretely: which normalization technique should I use, and how much neural versus keyword weight is ideal?

Unfortunately, there is no one-size-fits-all solution. If there were one best configuration, there wouldn't be a need to provide any options, right? The best configuration depends on a plethora of factors related to any given search application's data, users, and domain.

However, there is a systematic way to arrive at this ideal set of parameters and even go beyond that. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set *globally* for all incoming queries. We will cover this approach first before moving on to a dynamic approach that identifies hybrid query parameters individually per query.
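To make this parameter space concrete before diving in, here is a minimal sketch of such a pipeline definition. It uses the Python `requests` library against a local cluster; the pipeline name and the weights are purely illustrative, not a recommendation:

```python
import requests

# Illustrative search pipeline: min_max normalization plus a weighted arithmetic mean.
# The weights give the keyword result set 30% and the neural result set 70% influence.
pipeline = {
    "phase_results_processors": [
        {
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {
                    "technique": "arithmetic_mean",
                    "parameters": {"weights": [0.3, 0.7]},
                },
            }
        }
    ]
}

# Register the pipeline under a name so that hybrid queries can reference it.
requests.put("http://localhost:9200/_search/pipeline/hybrid-example", json=pipeline)
```

The normalization technique, the combination technique, and the two weights are exactly the parameters the rest of this post tries to choose well.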
# Global hybrid search optimizer

To identify the best hybrid search configuration, we treat this as a parameter optimization challenge. We know which values the parameters can take, so we know which combinations exist:

* There are two [normalization techniques: l2 and min_max](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/).
* There are three combination techniques: arithmetic mean, harmonic mean, and geometric mean.
* The keyword and neural search weights are values in the range from 0 to 1.

With this knowledge we can define a collection of parameter combinations to try out and compare to each other. To follow this path we need three things:

1. Query set: a collection of queries.
2. Judgments: a collection of ratings that tell us how relevant a result is for a given query.
3. Search metrics: a numeric expression of how well the search system does in returning relevant documents for queries.

## Query set

A query set is a collection of queries. Ideally, a query set contains a representative set of queries. Representative means that different query classes are included:

* Very frequent queries (head queries), but also queries that are used rarely (tail queries)
* Queries that are important to the business
* Queries that express different user intent classes (e.g. searching for a product category, searching for a product category plus a color, searching for a brand)
* Other classes, depending on the individual search application

These queries are best sourced from a query log that captures all queries your users send to your system. One way of sampling them efficiently is [Probability-Proportional-to-Size Sampling](https://opensourceconnections.com/blog/2022/10/13/how-to-succeed-with-explicit-relevance-evaluation-using-probability-proportional-to-size-sampling/) (PPTSS). This method can generate a frequency-weighted sample.

We will first run each query in the query set against a baseline to see where our search result quality stands at the beginning of this experimentation phase.

## Judgments

Once a query set is available, judgments come next. A judgment describes how relevant a particular document is for a given query. A judgment consists of three parts: the query, the document, and a (typically numerical) rating.

Ratings can be binary (0 or 1, that is, irrelevant or relevant) or graded (e.g. 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, human raters go through query-document pairs and assign ratings according to a set of guidelines. Implicit judgments, on the other hand, are derived from user behavior: user queries together with viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple clickthrough rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias.

Recently, a third category of generating judgments emerged: LLM-as-a-judge. Here you use large language models like GPT-4o to judge query-document pairs.

All three categories have different strengths and weaknesses. Whichever you choose, make sure you have a decent amount of judgments. Twice the depth of your default search result page per query is usually a good starting point for explicit judgments. So if you show your users 24 results per result page, you should rate the first 48 results for each query.
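To illustrate the implicit route, the following deliberately simplified sketch turns raw impression and click events into clickthrough-rate judgments per query-document pair. It ignores position bias and the more sophisticated click models mentioned above, so treat it as an illustration rather than a recipe:

```python
from collections import defaultdict

def ctr_judgments(events, min_impressions=10):
    """Derive naive clickthrough-rate judgments from a list of user events.

    Each event is a dict like {"query": ..., "doc_id": ..., "clicked": bool}.
    Returns {(query, doc_id): rating} with ratings in [0, 1].
    """
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for event in events:
        key = (event["query"], event["doc_id"])
        impressions[key] += 1
        if event["clicked"]:
            clicks[key] += 1
    return {
        key: clicks[key] / impressions[key]
        for key in impressions
        if impressions[key] >= min_impressions  # avoid judging from too little data
    }
```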
Implicit judgments have the advantage of scale: if you already collect user events (such as queries, viewed documents, and clicked documents), modeling these events into judgments lets you calculate thousands of judgments as a by-product of normal usage.

## Search metrics

With a query set and the corresponding judgments we can calculate search metrics. Widely used [search metrics are Precision, DCG, and NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/).

Search metrics provide a way of measuring the search result quality of a search system numerically. We calculate search metrics for each configuration, which enables us to compare the configurations objectively against each other. As a result, we know which configuration scored best.

If you're looking for guidance and support to generate a query set, create implicit judgments based on user behavior signals, or calculate metrics based on these, feel free to [check out the search result quality evaluation framework](https://github.com/o19s/opensearch-search-quality-evaluation/).

## Create a baseline with the ESCI dataset

Let's put all the pieces together and calculate search metrics for one particular example: in the [hybrid search optimizer repository](https://github.com/o19s/opensearch-hybrid-search-optimization/) we use the [ESCI dataset](https://github.com/amazon-science/esci-data), and in [notebooks 1-3](https://github.com/o19s/opensearch-hybrid-search-optimization/tree/main/notebooks) we configure OpenSearch to run hybrid queries, index the products of the ESCI dataset, create a query set, and execute each of the queries in a keyword search setting that we treat as our baseline. The search metrics can be calculated because the ESCI dataset comes not only with products and queries but also with judgments.

We chose a `multi_match` query of the type `best_fields` as our baseline. We search the different fields of the dataset with "best guess" field weights. In a real-world scenario we recommend techniques like learning to boost based on Bayesian optimization to figure out the best combination of fields and field weights. In the following query template, the variable `query[2]` holds the user query string:

```
{
  "_source": {
    "excludes": [
      "title_embedding"
    ]
  },
  "query": {
    "multi_match" : {
      "type": "best_fields",
      "fields": [
        "product_id^100",
        "product_bullet_point^3",
        "product_color^2",
        "product_brand^5",
        "product_description",
        "product_title^10"
      ],
      "operator": "and",
      "query": query[2]
    }
  }
}
```

To arrive at a query set we went with two random samples: a small one containing 250 queries and a large one containing 5,000 queries. Unfortunately, the ESCI dataset does not contain any information about the frequency of queries, which rules out frequency-weighted approaches like the above-mentioned PPTSS.

These are the results of independently running the test portion of both query sets:

| Metric | Baseline BM25 - Small | Baseline BM25 - Large |
| :---: | :---: | :---: |
| DCG@10 | 9.65 | 8.82 |
| NDCG@10 | 0.24 | 0.23 |
| Precision@10 | 0.27 | 0.24 |

We applied an 80/20 split to the query sets to obtain a training and a test dataset for the upcoming optimization steps. For the baseline we used the test set to calculate the search metrics. Every optimization step uses the 80% training part of the query set for finding the best configuration and the 20% test part for calculating and comparing the search metrics.
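For reference, the three metrics in the table can be computed from graded judgments roughly as follows. This is a simplified sketch (using the linear-gain variant of DCG), not the exact implementation of the evaluation framework linked above. Here, `ratings` is the list of judgment ratings of the returned documents in ranked order:

```python
import math

def dcg_at_k(ratings, k=10):
    """Discounted cumulative gain for a ranked list of judgment ratings."""
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(ratings[:k]))

def ndcg_at_k(ratings, k=10):
    """DCG normalized by the ideal (descending-sorted) ranking."""
    ideal = dcg_at_k(sorted(ratings, reverse=True), k)
    return dcg_at_k(ratings, k) / ideal if ideal > 0 else 0.0

def precision_at_k(ratings, k=10, threshold=1):
    """Share of the top k results rated at or above the relevance threshold."""
    top = ratings[:k]
    return sum(1 for rel in top if rel >= threshold) / k if top else 0.0
```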
These numbers are now the starting point for our optimization journey. We want to maximize these metrics and see how far we get when looking for the best global hybrid search configuration in the next step.

## Identifying the best hybrid search configuration

With that starting point we can set off to explore the parameter space that hybrid search offers us. Our global hybrid search optimization notebook tries out 66 parameter combinations for hybrid search with the following set:

* Normalization technique: [l2, min_max]
* Combination technique: [arithmetic_mean, harmonic_mean, geometric_mean]
* Keyword search weight: [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
* Neural search weight: [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]

Neural and keyword search weights always add up to 1.0, so a keyword search weight of 0.1 automatically comes with a neural search weight of 0.9, a keyword search weight of 0.2 comes with a neural search weight of 0.8, and so on.

This leaves us with 66 combinations to test: 2 normalization techniques * 3 combination techniques * 11 keyword/neural search weight combinations.

For each of these combinations we run the queries of the training set. To do so we use OpenSearch's [temporary search pipeline capability](https://opensearch.org/docs/latest/search-plugins/search-pipelines/using-search-pipeline/#using-a-temporary-search-pipeline-for-a-request), which saves us from pre-creating all pipelines for the 66 parameter combinations.

Here is a template of the temporary search pipeline we use for our hybrid search queries:

```
"search_pipeline": {
  "request_processors": [
    {
      "neural_query_enricher" : {
        "description": "one of many search pipelines for experimentation",
        "default_model_id": model_id,
        "neural_field_default_id": {
          "title_embeddings": model_id
        }
      }
    }
  ],
  "phase_results_processors": [
    {
      "normalization-processor": {
        "normalization": {
          "technique": norm
        },
        "combination": {
          "technique": combi,
          "parameters": {
            "weights": [
              keywordness,
              neuralness
            ]
          }
        }
      }
    }
  ]
}
```

`norm` is the variable for the normalization technique, `combi` is the variable for the combination technique, `keywordness` is the keyword search weight, and `neuralness` is the neural search weight.

The neural part of the hybrid query searches a field that contains embeddings created from the product title with the model `all-MiniLM-L6-v2`:

```
{
  "neural": {
    "title_embedding": {
      "query_text": query[2],
      "k": 100
    }
  }
}
```

Using the queries of the training dataset and retrieving the results, we calculate the three search metrics DCG@10, NDCG@10, and Precision@10.
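Conceptually, the sweep itself is a plain grid loop over the parameter space. The following condensed sketch enumerates the 66 combinations and builds the result-processing part of the temporary pipeline shown above; the `neural_query_enricher`, the actual query bodies, and the search calls are omitted for brevity:

```python
from itertools import product

normalizations = ["l2", "min_max"]
combinations = ["arithmetic_mean", "harmonic_mean", "geometric_mean"]
keyword_weights = [round(0.1 * i, 1) for i in range(11)]  # 0.0, 0.1, ..., 1.0

def phase_results_processors(norm, combi, keywordness, neuralness):
    """Result-processing section of the temporary search pipeline for one combination."""
    return [
        {
            "normalization-processor": {
                "normalization": {"technique": norm},
                "combination": {
                    "technique": combi,
                    "parameters": {"weights": [keywordness, neuralness]},
                },
            }
        }
    ]

grid = [
    (norm, combi, kw, round(1.0 - kw, 1))
    for norm, combi, kw in product(normalizations, combinations, keyword_weights)
]
print(len(grid))  # 2 * 3 * 11 = 66 parameter combinations
```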
For the small dataset there is one pipeline configuration that scores best for all three metrics. The pipeline uses the l2 norm, the arithmetic mean, a keyword search weight of 0.4, and a neural search weight of 0.6. The following metrics are calculated for this configuration:

* DCG: 9.99
* NDCG: 0.26
* Precision: 0.29

Applying this potentially best hybrid search parameter combination to the test set and calculating the metrics for these queries results in the following numbers:

| Metric | Baseline BM25 - Small | Global Hybrid Search Optimizer - Small | Baseline BM25 - Large | Global Hybrid Search Optimizer - Large |
| :---: | :---: | :---: | :---: | :---: |
| DCG@10 | 9.65 | 9.99 | 8.82 | 9.30 |
| NDCG@10 | 0.24 | 0.26 | 0.23 | 0.25 |
| Precision@10 | 0.27 | 0.29 | 0.24 | 0.27 |

Looking at these numbers, we can see improvements across all metrics for both datasets. To recap, at this point we did the following:

* Created a query set by random sampling
* Generated judgments (to be precise, we simply reused the existing judgments of the ESCI dataset)
* Calculated search metrics for a baseline
* Tried out several hybrid search combinations
* Compared search metrics

Two things are important to note:

* While the systematic approach can be transferred to other applications, the experiment results cannot! It is necessary to always evaluate and experiment with your own data.
* The ESCI dataset does not have 100% judgment coverage. On average we saw roughly 35% judgment coverage among the top 10 retrieved results per query. This leaves us with some uncertainty.

The improvements tell us that we optimize our metrics on average when switching to hybrid search with the above-mentioned parameter values. But of course there are queries that win and queries that lose when making this switch. This is something we can virtually always observe when comparing two search configurations: while one configuration outperforms the other on average, not every query profits from it.

The following chart shows the DCG@10 values of the training queries of the small query set. The x-axis represents the search pipeline with l2 normalization, the arithmetic mean, a 0.1 keyword search weight, and a 0.9 neural search weight (configuration A). The y-axis represents the search pipeline with identical normalization and combination techniques but switched weights: a 0.9 keyword search weight and a 0.1 neural search weight (configuration B).

![Scatter plot of DCG values for the keyword-heavy search configuration and the neural-heavy search configuration](/assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/1_search_config_comparison.png){:style="width: 100%; max-width: 800px; height: auto; text-align: center"}

The clearest winners of configuration B are the queries located directly on the y-axis: they have a DCG score of 0 under configuration A. Conversely, the clearest winners of configuration A lie on the x-axis, and some of them even score above 15.

As we strive for having only winners, this leads us to the question: improvements on average are fine, but how can we tackle this in a more targeted way and come up with an approach that provides the best configuration per query instead of one good configuration for all queries?

# Dynamic hybrid search optimizer

We call this approach of identifying a suitable configuration individually per hybrid search query *dynamic hybrid search optimization*. To move in that direction we treat hybrid search as a query understanding challenge: by understanding certain features of a query, we develop an approach to predict the "neuralness" of that query. "Neuralness" is the term we use for the neural search weight of the hybrid search query.

You may ask: why predict only the "neuralness" and none of the other parameter values?
The results of the global hybrid search optimizer (large query set) showed us that the best-performing search configurations largely share two parameter values: l2 as the normalization technique and the arithmetic mean as the combination technique.

Looking at the top 5 configurations per search metric (DCG@10, NDCG@10, and Precision@10), only five out of the 15 pipelines use min_max as an alternative normalization technique, and none of these configurations uses another combination technique.

With that knowledge we assume the l2 normalization technique and the arithmetic mean combination technique to be best suited across the whole dataset.

That leaves us with the parameter values for the neural search weight and the keyword search weight. By predicting one we can calculate the other by subtracting the prediction from 1: by predicting the "neuralness" we obtain the "keywordness" as 1 - "neuralness".

To validate our hypothesis, we came up with a couple of feature groups and features within these groups. Afterwards, we trained machine learning models to predict an expected NDCG value for a given "neuralness" of a query.

## Feature groups and features

We divide the features into three groups: query features, keyword search result features, and neural search result features:

* Query features: these features describe the user query string.
* Keyword search result features: these features describe the results that the user query retrieves when executed as a keyword search.
* Neural search result features: these features describe the results that the user query retrieves when executed as a neural search.

### Query features

* Number of terms: how many terms does the user query have?
* Query length: how long is the user query (measured in characters)?
* Contains number: does the query contain one or more numbers?
* Contains special character: does the query contain one or more special characters (non-alphanumeric characters)?

### Keyword search result features

* Number of results: the number of results for the keyword query.
* Maximum title score: the maximum score of the titles of the retrieved top 10 documents. The scores are BM25 scores calculated individually per result set. That means that the BM25 score is not calculated on the whole index but only on the retrieved subset for the query, making the scores more comparable to each other and less prone to outliers that could result from high IDF values for very rare query terms.
* Sum of title scores: the sum of the title scores of the top 10 documents, again calculated per result set. We use the sum of the scores (rather than an average value) as an aggregate to measure how relevant all of the retrieved top 10 titles are. BM25 scores are not normalized, so using the sum instead of the average seemed reasonable.

### Neural search result features

* Maximum semantic score: the maximum semantic score of the retrieved top 10 documents. This is the score we receive for a neural query based on the query's similarity to the title.
* Average semantic score: in contrast to BM25 scores, semantic scores are normalized to the range of 0 to 1. Using the average score therefore seems more reasonable than using the sum here.
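To make these definitions concrete, the nine features for a single query might be computed roughly as follows. The function arguments stand in for the data gathered from the keyword and neural top 10 results; the actual notebook implementation may differ in its details:

```python
import re

def query_features(query, keyword_title_scores, keyword_result_count, semantic_scores):
    """Compute the nine features for one query.

    keyword_title_scores: per-result BM25 title scores of the top 10 keyword hits
    keyword_result_count: total number of results of the keyword query
    semantic_scores: semantic scores of the top 10 neural hits
    """
    terms = query.split()
    return {
        # query features
        "number_of_terms": len(terms),
        "query_length": len(query),
        "contains_number": any(ch.isdigit() for ch in query),
        "contains_special_character": bool(re.search(r"[^a-zA-Z0-9\s]", query)),
        # keyword search result features
        "number_of_results": keyword_result_count,
        "max_title_score": max(keyword_title_scores, default=0.0),
        "sum_title_scores": sum(keyword_title_scores),
        # neural search result features
        "max_semantic_score": max(semantic_scores, default=0.0),
        "avg_semantic_score": sum(semantic_scores) / len(semantic_scores) if semantic_scores else 0.0,
    }
```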
## Feature engineering

As training data we used the output of the global hybrid search optimizer. As part of that process we ran every query 66 times: once per hybrid search configuration. For each query we calculated the search metrics, so we know which pipeline, and therefore which "neuralness" (neural search weight), worked best for that query. We used the best NDCG@10 value per query as the metric deciding what the ideal "neuralness" was.

That leaves us with 250 queries (small query set) or 5,000 queries (large query set), together with the "neuralness" values for which they achieved their best NDCG@10 scores. Next, we engineered the nine features for each query. This constitutes the training and test data.

## Model training and evaluation

With the appropriate data at hand, we explored different algorithms and experimented with different model-fitting settings to identify patterns and evaluate whether we were on the right track with this approach. We went with two relatively simple algorithms: linear regression and random forest regression. We applied cross-validation and regularization and tried out all of the different feature combinations. This resulted in interesting findings, summarized below.

**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset produced a smaller root mean squared error (RMSE) than the smaller dataset. It also resulted in less variation of the RMSE scores within the cross-validation runs (that is, when comparing the RMSE scores within one cross-validation run for one feature combination).

**Model performance differs between the algorithms**: The best RMSE score for the random forest regressor was 0.18 vs. 0.22 for the best linear regression model (large dataset), though with different feature combinations. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model.

**Feature combinations drawing on all groups have the lowest RMSE**: The lowest error scores are achieved when combining features from all three feature groups (query, keyword search result, and neural search result). Looking at RMSE scores for feature combinations within a single feature group shows that using only keyword search result features is the best alternative.

This is particularly interesting when thinking about productionizing the approach: putting it into production means that the features need to be calculated per query at query time. Computing keyword search result features and neural search result features requires running those queries, which would add significant latency to the overall request even before inference time.

The following picture shows the distribution of RMSE scores within one cross-validation run when fitting random forest regression models with feature combinations from a single group (blue: neural search result features, red: keyword search result features, green: query features) and across the groups (purple: features from all groups). The feature mix (purple) scores lowest (best), followed by training on keyword search result features only (red).

![Box plot showing the distribution of RMSE scores within one cross-validation run when fitting the random forest regression model](/assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/2_random_forest_best_feature_combinations.png){:style="width: 100%; max-width: 800px; height: auto; text-align: center"}

The overall picture does not change when looking at the numbers for the linear model:

![Box plot showing the distribution of RMSE scores within one cross-validation run when fitting the linear regression model](/assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/3_linear_model_best_feature_combinations.png){:style="width: 100%; max-width: 800px; height: auto; text-align: center"}
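As a rough illustration of this evaluation setup, the cross-validated comparison of the two regressors might look like the following sketch. It assumes scikit-learn, a feature matrix `X` with one row per (query, candidate "neuralness") combination containing the nine features plus the "neuralness", and a target `y` with the NDCG@10 observed for that combination in the grid sweep:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def compare_models(X, y, folds=5):
    """Cross-validated RMSE of both regressors on the feature table X.

    X: one row per (query, candidate neuralness) with the nine features plus neuralness.
    y: the NDCG@10 observed for that query at that neuralness in the grid sweep.
    """
    models = {
        "linear regression": LinearRegression(),
        "random forest": RandomForestRegressor(random_state=42),
    }
    for name, model in models.items():
        neg_mse = cross_val_score(model, X, y, cv=folds, scoring="neg_mean_squared_error")
        rmse = np.sqrt(-neg_mse)
        print(f"{name}: mean RMSE {rmse.mean():.3f} (std {rmse.std():.3f})")
```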
## Model testing

Let's look at how the trained models perform when we apply them dynamically to our test set.

For each query of the test set we engineer the features and let the model predict the expected NDCG for each candidate "neuralness" value between 0.0 and 1.0, since the "neuralness" itself is one of the features we pass into the model. We then take the "neuralness" value that results in the highest prediction, that is, the best predicted NDCG value. Knowing the "neuralness", we can calculate the "keywordness" by subtracting the "neuralness" from 1.
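That per-query decision can be sketched as a small argmax over the candidate values, assuming a fitted regression model from the previous section and a feature dictionary like the one produced by the `query_features` sketch above (the column order must match whatever the model was trained on):

```python
def predict_best_neuralness(model, features, step=0.1):
    """Pick the "neuralness" whose predicted NDCG@10 is highest for one query.

    features: dict with the nine query/result features for this query.
    Returns a (neuralness, keywordness) tuple.
    """
    candidates = [round(step * i, 1) for i in range(int(1 / step) + 1)]  # 0.0 .. 1.0
    best = max(
        candidates,
        key=lambda neuralness: model.predict([[*features.values(), neuralness]])[0],
    )
    return best, round(1.0 - best, 1)
```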
We again use the l2 norm and the arithmetic mean as our hybrid search normalization and combination parameter values because they scored best in the global hybrid search optimizer experiment. With that we build the hybrid query, execute it, retrieve the results, and calculate the search metrics as we did for the baseline and the global hybrid search optimizer.

Metrics for the small dataset:

| Metric | Baseline BM25 | Global Hybrid Search Optimizer | Dynamic Hybrid Search Optimizer - Linear Model | Dynamic Hybrid Search Optimizer - Random Forest Model |
| :---: | :---: | :---: | :---: | :---: |
| DCG@10 | 9.65 | 9.99 | 10.92 | 10.92 |
| NDCG@10 | 0.24 | 0.26 | 0.28 | 0.28 |
| Precision@10 | 0.27 | 0.29 | 0.32 | 0.32 |

Metrics for the large dataset:

| Metric | Baseline BM25 | Global Hybrid Search Optimizer | Dynamic Hybrid Search Optimizer - Linear Model | Dynamic Hybrid Search Optimizer - Random Forest Model |
| :---: | :---: | :---: | :---: | :---: |
| DCG@10 | 8.82 | 9.30 | 10.13 | 10.13 |
| NDCG@10 | 0.23 | 0.25 | 0.27 | 0.27 |
| Precision@10 | 0.24 | 0.27 | 0.29 | 0.29 |

These numbers show a steady positive trend, starting from the baseline and going all the way to the dynamic per-query predictions of "keywordness" and "neuralness". The large dataset shows a DCG increase of 8.9%, rising from 9.30 to 10.13; the small dataset shows an increase of 9.3%. The other metrics increase as well: NDCG improves by 8% for the large dataset and 7.7% for the small dataset, and precision improves by 7.4% for the large dataset and 10.3% for the small dataset.

Interestingly, both models score exactly the same. The reason is that, while they predict different NDCG values, they reach their highest predictions at the same "neuralness" input values. So although the models differ in their RMSE scores during the evaluation phase, they provide identical results when applied to the test set.

Despite the low judgment coverage, we see improvements for all metrics. This gives us confidence that the approach can provide value not only for search systems switching from keyword to hybrid search but also for systems that are already in production but have never used a systematic process to evaluate and identify the best settings.

# Conclusion

We provide a systematic approach to optimizing hybrid search in OpenSearch based on its current state and capabilities (normalization and combination techniques). The results look promising, especially given the low judgment coverage of the ESCI dataset.

We encourage everyone to adopt the approach and explore its usefulness in their domain with their own dataset. We look forward to hearing about the community's experimentation results with the provided approach.

# Future work

The currently planned next steps include replicating the approach with a dataset that has higher judgment coverage and covers a different domain in order to assess its generalizability.

Optimizing hybrid search typically is not the first step in search result quality optimization. Optimizing keyword search results first is especially important because the keyword search query is part of the hybrid search query. Bayesian optimization is an efficient technique for identifying the best set of fields and field weights, an approach sometimes referred to as learning to boost.

The straightforward approach of trying out 66 different combinations can also be made more elegant by applying a technique like Bayesian optimization. In particular for large search indexes and large query sets, we expect this to result in a performance improvement.

Reciprocal rank fusion is another way of combining keyword search and neural search and is currently under active development:

* [https://github.com/opensearch-project/neural-search/issues/865](https://github.com/opensearch-project/neural-search/issues/865)
* [https://github.com/opensearch-project/neural-search/issues/659](https://github.com/opensearch-project/neural-search/issues/659)

We plan to include this technique as well when identifying the best way of running hybrid search dynamically per query.
z=_l_h#0WgwZDDo38Ffs-+r98282Hvf1D{$)S$vGPcfQfOAPBmWcdjp<;Qsm(ISJyT z%)k>KYh%|mIl;7hz@q*!?JGA4k$D4q;TuB%Ed?+4BJuL0+wxj?WmZ!LSqCXsozK{a z;N8KS+ZNNCMPtcB&R*?zyCz$nOw4`eRC$y{-kKvk3&Tp5Dx9J1p0%P~YPu(e%~HY3 zzyx8#l>M*VUp_tCb*oDyiljn{KM$;jE*Sxp?3Fmyp9$sr8p;U^^w(GyNG|Wtt#r-8 zKuat}HRG7YG_{C>#wRx%^T2$^1JL_ULoN@wKh?VXoQmO}x&D@ywk3DPEs(k9|V zE>V$c{JfT}kr6M*(9yO~y7Mx*x)UyWX?hV6l$K33WlH*Y2$0u@oz`Il$np!EM<4tm z;d^WNe1#(W)8?0#i@s5=i2+ToF^@#kd}6bg|F3%w%FYiAQ*54+d*DryUBIfjMAn-et6ve{xLHnV=wRgr=An`c(GLi z5RN?)vhsUfVg4$?#fxlGAZTApc#DEEc5C;<=+>`qUq#SK0)cD@m@i8AznX?->+%_C zYDSRHs9?nUUt7jUa>sMXM?^p_Ph^)uuy=J&9%Q29l%l5*&2V5D{M@}qqL^g9)+UI2 z-71s=zIBW9fQ?77kV7^s=A!22MwjHZBG@lpd{fo~Vh@khL6PmGyD5RY&0zxM2ZgJ> z2e~R9UHw0ePyjc+fq@|Q>O6Q^KOx0m#kWfk(;~C9rV#_#wj=QxH5Vc^!qxK9{dF5UuRoEFVNsw(3j>zA7wzx&Is*FIf$x~>Fp+s z1F{^#kih38}SsBQaO(YXfv-!0>m$8H#dhYc9B;r4$r zsdV-`1A@wY6@e%=&9ZXiWE5L|YwH`9UrgzdA#*e3FCR4g|BML!@wbPhkW#2d7=QHS@X~M%Y0)@RPFbBg z+t+yVs%OuutEFF(e#aW4j;BV$>7h4+fVG?iU)L^B4s#kqp8LGwCyh%Kc=%KK_@h>1 zt|h)r8*7ngz^WH7B6lL;W_Au6{SB6{uExN*ZtF52QEcq&1|hkpy~dkzdX5^m%7NdW z|1JMM3cA8Wh#aCy3VW)jbuL<^rXv1yO)mWHeDxqcYKT8P2GQnhDkoLetaK^u7GvF- zjg#IIP%bKxJ2F^j5}|d%E&6vXmP8Y1Zz`xFXjQ{U%C4evZJClKuV}BIugJ43eYjig zu`#2&X_3dghaRq%1IWvoS<2v9Iw0{fGM7Nf4blJvtiE%Dr+KuCTI^t6SJtd-g!&an{{DHgvk*U z55!f0Ane^vE`IS2A2i<`y4W_uama@Fv~R(gBq8%!$*nCpjW$bwwC|Z6m?UHo&F7nA z*DKk2B(Fx~d?L|%wp+KutY@pEJfk+0J~fdio9`?74H5Dlv;z0e3%Fksv{DNyKbI}y zhD?p2h?P4*(?JIIu%eQl?iUc@I7NJcDcoa91Y5^z)#fg{;aw!~b)LrFHJB~g(+O|I z%npjN7M7N+0qwr36-9UO2MH|5O6U8yXlZfj*D3^AddUnCD7JHa+6uqsMiLf$B>1|+ zeJTdgjAQa;uVK8W?>av`Mn8@6bcp_>Bh*y35MOGC#3N{cg%gu|bB*;Js%k3?JJXX^ z7q_3pr~r{IeYfj$b)lvZyB-OoJmELO#SV}}ece1>CQdZ5bHl|te0H5`_?~=kS+yKB zuO8pfQAsYqV8#H7Rz~S=!;3sYbM$R63vaG)eF%qoA}2p2iIoZBdrk96Mw920x=3l+ z#+L5F2e1;U92H_^B_(D7uAK@+)nCgogcqJ@a?*cmWSW|$D!xw3)jAm*6&Y!vxLri& zA@QYootN&>O_SALg*zTYHG<2{wSXZVOF&sHnNwLTKjmDtZzVy7rRV+}aWy${yS9C+%K2`w3v!n zhCQ!qPxHo|gKNkbgHs9I*2a-_rE?+Wk}vV0hbAK3W!f%VOM=@5M^4SDtR3Cti=B%Z zj|>!%C;V+<4Uo~MqQdyK^_ZvjMOKUGz;_|M+WI*hF;DS9AIw*9p~uYvUEkp>buDf6 zStL)=g$Eu(^slM8we!PW6B#4p^@ZV#)14FXOgb$$%0IIB% z(^v4y)-50bKF!cowF>XzOD-Gts@+kfz3dSodHEIcW`X_D||Xa z*_`B?HAV`O67osOLfn|f%*2~o0p#VEFlXKoaSV%w_aFbo)R>Kl0A$3+l_y%0O?a_Z z(o=6|Y_l*st9kP3n|DU5WzO-7MFlKZYVTkQ?^58)1V;yI9f~%oKrD;U%OK3Rc3RoO(Ue7 z401Ax2hp4oVI}d)_x)t>ujVo*2Eps3fcEJf*MfDckIUp=>;gd#V2LQUlaHHNex+&| zDVMZN@v`iUM*F<+xK@5Pf#nZkHkLXhkX(T-(qBTZA8H>zz|>BfuC?$lKg_Xbf`ecz=v_%cwB}2)Hh^Qf8HHCe@LO&bK{}eeM@W1TiRj{QA99 zqRjX&&X)S7tJd8pnj1KSE)=PJeiDpfNJ7N1tf?GA_=^=T1>jK(qp#Qbw98(s)@S(n zIWrh3D94L^KQ)un8dzGCV7t=l@38dp+dQ-%a8n7Qd}lG;%N+~46OS2fl}x`D-hcJ$ zDx%EzK=@C`Fflcc(Sfee((>K)jH;`qUme2A#^oT3Lq~vF{6ZzrhY%E=fFylh z3Wn_B$^!cFs@+lOv3zkQ26w2V{Q$`0=L60@C~}0DOOCg<{;Q~j5 ztU8a?3V%U;FPg7|bCvhTBmoroHwWj%y=3|RNX1HlUPVaVZOt5>yif1TIOs*cTSehX z$X-8l5pJ<5Q=C_i<*Me#v6nsoS+BGGT^NHweTK_!VePPTKEs=3cFiqeV-X+6`^E(Y zOY7hLGkhT2bfw4kS<%;;+`!{b9s_LIaOK;7Bhi1*`ce&Qo{qXjqDs5Dc!Ex8l~UMi z@$ip*DD+~p-nM+v+;^Y=LnO#1u z#*?mhFD`Y`j3fE$V$clv(P^PMJmws2VdLX$?zfdhxJ%9nT(tO?T`Yp8*<1p_s_QOG zhXYPM+S)EsDuT;8JrcS4_*UeeWk*r*;qReA)Plh01jNB%2(rK^q;E{$qd1e~Q*EAy zXnvIJ%R0tb)4RP1PS7c@Zg^WX8>D62yL&?nh%8q|YRj6V)Mg}Pm{z~WuZ;1(1YNrR zXJ>EA5^)%36XCucf_5y&Z$sYTwryr}i}!$~Qxvr`#5ryVJx%s)m%>Q4c)wxt#`Pa( zMB~g8L|e7m@W-s<9~ ze|78eiDGUNAaxV6Qc9IgDOCcKzw;j8&@t-m0xiJdy~tqzED^f+7j-h>mR#KBPsa;< zPo>{^P<;Og2RI%YRdqBtJa52N)U361Ucz#UieEK2?ZUE>I*?*v#$89$F)=C77NwE(ZdM%=KL z`G$+XrEUjdQplV~7ar!HI*!mi!qIBd8fgNs)!Nx}ccQ3d8TTefiYyw%!#@tNDd)XU z=_T(s^XCS!wDr^czLJMN7l(K72fmq?m+IMO-r9YG^vr0yol(c4-sW@G;6YR2DG*%$ z4mISswNhox*$7#tSfYjV(B`0WyH5Knrj$k|9^dG)Kwr=us1oM**TwXIZTs4Tm 
z^ODx^spIW6=4ga6Pe|{}0pJ(8C;lnGzIo^7>3^FG3FLN)&(dX(5B?qRN5BAmQurDE zTs1%?>Sj^J>gsR`6soH{`ZlsO6?l>XP;34@Flb0q&k%8Z>LG>XBD+}d4)50i3WGUn z5W@amC{Qy$1SuL0C!m^5cL~IYve8TU^&kch|J#u9Z>DZ~+b9wBR-B9BTaSAS0<6>F z&A&GDzpwSf?f<)4-<|Nkw)H&({Qtxg7t&B{+}sA2?e+{>qSTP*l$BXZ%KkGEGs=t) zGn~YTAR`SA{51o&SwK)e4o@cHSoGbATJspa(_i8l3VTbqSfQtH(#dSHtY%QEjB}8+ z?1@`Pci3td@RS*i80On(EbcIay)}OkoH=#EVE$FsAsTe>!D{~Z7>Z;5 z)|)i!OO1D#?z@%`br0gvFEXqozo+@*Y6ssiRDha+<@C~#tdVs8W*X7%{imoFCq{~VX&9`{G3?_5CECgf&sc$eN@FA$6c)Gn$#ZFB{ zsCz546uFzTv$OY$eQkA^n)l}f2gK<$y}>OE8Po91mbaWZl|-#j*r$E3?a;35jhkm5ohL3C*%Ivt0j1;kR%Z8|`Ba4EvD|}pKr^hnngVMe{1sCVA z5%6fsgN>BD09f+idOZCEd)fvhjt~Y3QAJ(Fhd=q4o8IqQguC)TKEU7EfM%5%C|!*F z|$(HhXEuA%t);iyL!l6g08Wif1EbI&_>kry;Sermrm6l6bMo*J%Z z70UE29f}Y=W@Gm-SM;i-0R-J?SAYC<1LN=W$(!}86PRTe>(uZK=7rRbad5Lky7$W$ zsfF*O?IkQGtJ%9Ia{{_@w&k5~=gkzE*($uv{E&0&Tt|pTh zwLJIHBjTV?{rC;4plk_(d#kxjtnH#X*mIOUZWj``ExvpjyfQYN*R>y5*;UeZ$5`To z#$t#2%gh!iuCz5G4fIHx?<5WDxPD`1n0*n)EFWJRJB%nl4jB}KMAu2EM02(83Bf8P>b$7obQ`?)&~SLlFk3I}7_@mArQG&i z+eI%;(7qSPHDX~GJ*)aD;<574VNpPY-Rm$}aeseg8fmOpnGV=mO*3$UNbU(74}77= zmXtz7W+>ed&G|#L&Yv>el)(8vg6LrXWeFQv zg_0xp{2!Sg{2_ll_k(E{agX7S>qb7p4Xc|{4!ghA{|?jq*}=o8k)0|;9131Scw$%* z`0LjoK_$Z@94hh8p0Uqfb_hQP6p-#hoM*?M3IqjQ@cssS-U9Tjyv7YU5pVNSJV7dF z0ABQS{g-v7GKBKIN6=boL{J5b)HnWR3tZ{MhArXva}t~aq#fC)l7ATdQEmXArefm3 z=il4mJv^`!4D|Y8!z}21Npb=no%2U$=Q)6woigMkkW^?JHNGsc0RU0XK(u-f9FB&8 z3e4qa&yk4Z(Ih>yfS3^44&Pb|d;y*)1aDXbgLu94>KzZK|0&nlQUUiHb@GdVsRj4r z^@OMF!Ry!@f7B}jCZ$fCFaHn!U;>0Za&?2?B}A<7dc`ZRK_E)yr8nLO2T;T)V*e8F z{GqYI_iOju!Ak$WR2rs~NmPiS6sjL0A^bZ65l(Z8lNC4qi9+}%X4CnMfF~MERB?ok z{}k?^bif@`K|U3v2haY3JK?hT3ht6XW%%sGwdkwX9i8p{~$94C6T8PH) zvIXmG42n&#;K3)3e@K-AJXVjc!Y=Z+_%uf^WN0AM{qGYZ?@0I}M>ZG3VlD_pD7<^w zKScU1d7gR&R3f(%Utebcl;h+15ru!zDN*=>&zb6H!H43;^GaFx781iBGyJ;>^3%aL z5pIxzwwL%nZ7;C?;N4r*e%7Ri3-BV;G)gcU9MDQXR7if`-#z~CI>Q*;B(|);{u`5p zx1>H^k-q^zHBeQ34ORi8eM0<;WdXX>LIa1tE1P@rV3X*gekZZMFZJK~cIh!b7hb@( z;g0^(g!*9;e-|yIX8;C)-o&@>PyTno{<~n`i}wGQEbPdJ>&PVgN=ryG%ziY#D^qE| z2%c7U!vmjg_BipTt_95eKNuc3c8x1sQj z#~f@e(4GNBH}f?-+f0P_%`un%>?t6M&MGg0XkQDW(3WI03r zdqH+$`BttUq;&V-sjI7py*E{zDH;+peu!EwQsZ6o5Z*OGI#^G4BRWNb7s7kJw8>XC zy`5zci?UdsA)*yw;Y#HtFZZou%arT(kNVD+EhXT?QUnMAfJzPmganxE7MSc819_Ml znCMVF5yt`vGF>h_g8e}#P^d}0>6cXi0@26c1qn$&Yg%XkdhGvsa6GO1UDm)o#ptzU z7GYr-gja@K|E`l-^2fh{;D3ArqXUs@aU2tAn^zME@ka4pR@ak$PdK z)}Wwekt)HwwsibrL~Nq&=(6uC>%837L0*0xJ06-$?d3!*GxAte`YY$xpdi!&v03Y? 
z#5~}`7{Gr&#JK<{DS)g|hJUuPNzMnQvu6tM&o&7~z1_?_bcp0FAg0)?)+#PT{)qiz zAfXNbn-I>H`w${xxezWD(M*CQ*(QaYBQl2*oqELqa%gZ@7ApZ2t2D(QyAKXga-SWa&AO^| z5OL74y*VyI(?rN>(sIY3(5q5%*;GVqVk<3iU%`~J{UfsWTd4Y~h6j@Jc+~KR^IHuP zb5w24MIASALGm#eG_CXUhl2yms+)&3S_YLHGG@Xj$cqL+E1@m6SU2BJ-$W&y8X2U8 zX+Dm`G#&kz90smZ!|SQJklLDP$@zPo;T$zbLc@1)mn?UhhCx)2yfjURFZh}HfMv`@ zN8vHmA8!5rccn+;!aqun*M`3-Jx=PIDCSR%NrRi*&IyoaHy`_66piDfxu?Q`;jkSP zn=jf7*{C==qV@R6Tb*Vr;T*!jeJrCp_jKOx1#)i>sDw8rqoV~a<g-XXV zb8%5XA0PR)1UD#^0yXQFhwXPwGFyv=;?rfZq|O%MUx^tP>Jx1z#*Ahs5Ra{u3s$V( zN~orU7Oa(xS=3pmE;^U6bES+HJFk?QjQ7?ISn998=*sOz@Oz(JkMPbCu`%;J8%w`D zbdOV@-+6_X+9Mq=Hei-Se3gIVUv|z`{ucAW;*bdsDs@xhq(o6IV;}RRO9b*Ub#8OP zeuuKbI?OL?;{L>!$AbkH8vwE{(sG ziqG%{RE=gBYJ7LdwFB5+gou; zW0XC&Q(Z&@o}P`pX?zjT{d+k*EF2DYrw&WgBh*&URllk^R}b8h^RnAu+ogP0ye3evS~&#-P5OL(y}%a=&90bCp`%c+V=< zxt;+K#d%#nMdxZKS3oBf{oiX%VyAAt0L?;W4Q_UPrxI(Oe4(*7mjj&6Sg6hyEwRDV zAMdwYjlb3F^j7%l=C}gBy7?n^kMRlI$R%dtrAnOYF~V+^-^j133}5Z|an+v1=T+H+ zQx+TdH4;y;Ejr61Jysk?1~Nt(#w^|^ki&coL1%Tvv2Flne4~v7X_J(+7K|?)PeJh` za;qFXqM*QyS8Z)P_*+$Oil#!)X<`o_4ub1&?fR;Fo{5%1^qUQbK=zT zoC6RMEnp{vItg$b0%AA2?7m%T#>|MXFtx)Ej3|Eqg}=+bKNjXcB{j@;Ywf&q=jS^V z@bYSXIWYH2#s3Uaz9J=<5fsd4 z4#EoeKLiTw%t}PMW5u;4 zS?cB^`>6%^ajNfM<^k$EC6AvT9>J0O1jNMe+^kl&4xlpG4)$<~`gE7uMMA21jIYPa zT=Rjbq?v@ua|iE@_#5J@o&F<-ER8 zT!@b0z8w8T>t)|ikmz(UicS4kvl;U#P7tGq&$AuKc)AiI9X;6vxO5+b>!FGR*YUb7 zK%CGYoA@vQ(vAfsn#5J|B)Y7M z%BhPEp`36!qTvd&`Zp>Sf9mRYDz%e;^H*7t{+h^NoCW|MBd2kG$^zW_%t}^cslI|zf(c%F!Jj9aDRzP67R^BxR|WN9s7lDBW9XWZu^Zgmkdn9 zL;RSPiWM1Ey914G1PQCy=~5ggl9#`JTG;RXeVe&J27{Y(6vV>EIsxz4YB!Td*ggLbdv6_A<<{+w8i-&aAO^h!79tW#N$XZbV388i z2uO!?gMbQ%fP%oHm2RXPL{dGr5o;e2)DZ5ea?IC{oMOGzkALfxRt%uGv^$0 z#CMGGo%5NR<{Yu0Fx{}`ze9Qm2UWHY7gg4M_wl|1j#xM-M@Dzb55ppNJEG6dHTLGYK;tqGbhn5 zHUwyw{PZK>)^JXu*YmWv{MCFD&X$!+e%hK1Vxd~Kw3TKmIwje~Zn9yrm8X#Fos|4c z4zt`gats0C01f;;)#@o;XT}{Cf}hHZ(4?ih#ym3+JSy!X6swL!D40HRX=^sgeab2R zwlY_h#VoE&2Pd8OHeVKp`WiECx$MAs7a?YG_~DQRXh5;w#e<>^9Y=t^LPUkR`t-gw zXBjnj-S)NVZ%$HT?q1lu;;KgQ$q#}E(xA7l`^w5s`@?MKYGgdK+dk2@TvdqV53q>0_`o0N5Ji;OxHjn{SreI|Mym-s zNh#q8+e7LjDCQ!xGJC}T<*B6TybGj9PRLznHhDXj_foANKd=F${}8sZ{PatBa3&W< zGb32d*W+~6`G?m~uqo)Qb+U0@49{Bssh*|wqXpxgb1hjyHxIzQ*^sEwe=Q3vX+0Wb0I4nY8^DR4s=w$P}PeT^eyw=Fde`mRFrk`2%{}4uk#fG zlMMo$U|~`fw8KNye3a8Ex6oLqER4`eGwPyo6*7^0;mzZKc^WCn_x_Pu|4;`CV}0-b z1rY2P0Z{iEVUQ~|rd|FDz?Un8lo-w7p>dt*vhudaZuc3Dt8r_@qE!x;QHQR*lR=10 z;{=qLcD^%MgxumV1sd1@N|yaDLRu;%-gUyv@^5rJP{n*QEZVg#C54l(`V1Q`uPla^ zyA@Amyl+*RVmR%ozgTpr^No$)AiYBClHW4g|LId1z3h8U*(Xwx6B|QqJe!Mjsaoh6 zgjRUVG~ql4{D+!jDwGHWiE|!YLZs{*@jon{zb-uOy%>4o>=3>0(u0N1Eau<#7mgKN z9+;mmZ(|+xAZa{89JFtJ%?popIissJ)U5qgFW2hl;xjrB4%Y@e0Jd?HEw~dTl!R8GeNQVv6?^ zw-Vu&iC1T?;eAUd596;LbMBcIvJq6~_VpXS6pVItF&yKM-2@YqLjv@}v=_S{S3fYj z>(i5UUEYEB`?6=jQiDe|UKB%;YGPHTxR^>`j-qkmv=b)fPCZEi7(;Sz)%puH^&I_0 zR~*!0Eczzsx;^N%!F^nmWjO4k2buxoxhNY8Fd*2}w+{jQ}< zv!VZf=fWG7g%6l}V+AoiZAWNTlpD7q$JrJtxMf!7y0lMl-Mf7yoV3pjZp5uyHu1Kd z=i*;!sp2;8u}>LbDjp`UT{rQb%6G&m(<~0o=LVs0+tne>W6g_T2hz!l1OG6U`7x(_ z%XTf6u)eMKmujE)$jb%is&T4+c7(VW_jZ1=%QV8ErCraeE`<#etF*iN4?^HQ(hYK}XZ_ZV;;2&>uQKV{<+# zjZ03M%gEr!sd*xlhw*n#lea{jx`hlDLyzN<;x7c7Dtpwc=UYrP=@&oN8J+$3ENyGt zI3}dByoL1~T2FPek($P}nk`TO2X*eY_?i;2nYfao0cvBY`8GB^Csfm59bE@#?YGi2 znQdMvTe*1b-&|DnZVH6?7mNBdYGD!5Hr!;^T^Z%AR=Z+FzsbZbhPiZWpYt;;;s7FT zbDqO%_}|IViA-w)^(75iOtIS@KU!JeVr^sEmm*PbO`I5(UdwsHN#e~ zHCl3Z4owiDe?@DM(qihfR&BmQu0g;tdMv+W1N}fbKpXgg{bF5eQTCqRiqP0WyyZ*s4pueA+BI8YOO{I-<&V zi`%#~#Gw|uI{Ge0p_yu)fZVog3ReAnnw_5dmPYX%9XP6RzFz8E3*DfWt^|?nsQ%L`FRoC+7!gy6nfy|hEFst|Yhw#gu zcUTTOC*$56*^K|N-jQgvF=kAfbk#1pzgF>LYk@;+sx#pqUl74kt(ASWS#8T0Ezqux 
zQ#h-baEI71fov>Ss{S}vSLj1Jn9sfg@(3>QDrhCVPlKT@30RuW^=VpvEYhmO^MoB+j#(5<6s_fyxn2*B# zeeFP0;+QVeTo*Cc`9PGbA2*rPY{YFCH7rUpp0czV zH=^Qr$8IVk<(8^N>iHMVa)%$eqKcC!G0`J~bMa$pjgbQANsiPNG!cD_c$|O$^GrU; zr$q+HQW!q18yqQ8ysYT9v26d5PVtQYXz^l|=G>Jd!%8j4&K$S6X!wpEeMHMGA0L!k z1X1pZ96*ZiKVn60~>mdw|b&W-3cF`${c*|be zu(Le7LW`}mMa#zKrttn%7 z3o={ygz%CkLiaK>(dX%bkF!?j_8+0)i;6Jqf1t)Rh>ZL z`#4>Mrj{Pke+UQEXxAEwaAy(xd-8)n{6cOvVPt_Ey?Q)fjQ>+`T-Ytwlov)!isM z>}SQlu0$K=5EIN{7~Nwzm3TaaqhZ|GDH=%vkOZKH>Dz!kVLlR#V>ZXk2+^jchn21` z!UJ}}6_S`fyq1dCtFJh38U(1TJ$4ihsSQiKQFweS!b3H3@gihY;z*)$w_qR^4(DIvwl2X%#3??`=5ljT{7c7 z9KPqVq;|!!`=%H1hP#Tj^~=s|-TdLxo2FkEf?14Tlb4!3a_jood)q~5FSy`<$Zdv)+mnN@IQ(_+3+T~r{-Og)oZ-RNh`^Hml$ISWd(H{VG9h%oL z{INtBrMbx*!uC}iN`>bsqaeG+X-vsGh*3kq^DmC_E@I3aI_+ItoyG$;I& zs^h}j;9+Lpj<4QpRpPa%?~|%i+K>&CizpCc2xdF+mvaBfd>Q+`Qao7a>t1rkABU=$ zp%8&#=E9o+$GsbGcsD=GtWD*-A4?;09MRaAtDx&N>0&VNYB?BYd_M3X?0B=PgVDdC zvIKD?OZ7s;7F|)*X{S;*JFIdi*$gD6HLq2VP{7x%eAIl`c)uLUpam@pXVc!f?Hh?) zH1G?5`x7og{V$2=*y&s$#2tsG`{bY*ot; z(8`-vfi-gceEK0YaE;P@axGAH3k!!;&<*ccvI$7@b2TuyveAT3K7efk>#15$t z70|(!R+a43N8MYpsg&3?ep6?`2DyCI#_u}T!i+q4$xo`YDf&2z$7L9X_EJ7@mjtfo`O^O ziBH zV0D~riCXnuqk~W1_1>m#V|8y|V-be$p`~lDfUE{6eIOajudi3{@5NXa1!3mP zF05zou;;~{uL+3e)g4N>A(KDAr%O@qOi=bHZbvuac^B-F&0lP{W42kWpevY8KV#mO zJ(8sB_oTq`Q&hpy3*72L3Wt_!lP!i_KMA9Vrs`yqiL?cC!dOxEsT5B4TUw%W=A-BF z(N_-BVl<5^kL1k|9>^Gl2-66U&38o_sWc3QU0fgwWR!6hk zU5_cju`L=a7*7!1O{aSp? z`5tLm`F^I_7%V^5*H@42iTr~GA}98GFp}75oXOHQZ?4HaSBuJ(HWdH9MduJVy#`;X zRDEXgOJnZP@)Y|`vrm9#pRm?tq`^evR@~cj?$^k{;l4|^0b~BpE3rAAEvwW;s}bib z)Ei^Y$R<`Rx-%%G2Cykvx3KH1{6#&eQlihW2gak!?<%n0FQF5>K z5+Q74p(ZmOwmSwCuH09tOL;1AqdKw_UR=#$=Lo-zJ(?uW}!f?1jtACy0W z`kE)|Bv+}_wdsWmKb+fLAS9$d*9TFhT2a=NQA5R5eq}a$szCBR z_HTH>8%)Xjq3$JGOtf?PkN10D3*VNAb8nw7i&?9Z7x`M2%KDO%Oo@Gkf)(#jhhj@~ z1NMJawl?nt$Fp01?GLC7(SGrhazKRZ5m6 zVOJba?aI=7t;0(2tA8J3_EzhE7&t7Ya?cla%54whwD&zcy2k}LU~>rxNW(UGsJ_}| zA;EAygD{HaV9GByR5h~saML#9Zl`0+M<4rNq&r@9gD?l;-6L&5*ybe{wf;T~%kJEu z-~gb1(DtxwM4uhAP!Y|@dq@(@#JHtW6k&QdRJY!|m%Ye%@dyIUzYzV%*2q9sq~9Br zgqej1pT7U0V?0O(p)m(;-ob|$UR~XxxFNoI=l_m`^&g7}KV_ZmFZr6MF#wZ4>xz9= z5Xq5rAy*;1R94P^NqalzME>{_Y4lZTLHh=2d6ajV-cn_gfUsH6s%+ewU;OlGzD~3MmNc$<2s(hTdXVDTee(vMmeTL-&#+U^TKcGyWY_B)Km6BK zoR1)3Bk<;h1cso`H`BU0gLv99sh0cu>$Gg1qxK?ot6PyL(XGZWQ>buImW>c)RHPEy zPuR6`U0p3ZB_R9K6Uxi=Kc+<)h@@3xTc$>!Ud~mj&D|@WtlytSy)&t*l>0gx5Q=7e zoJN!?@2Tcn?Kik|7v^WI`IX0C**v*_;iGiNMt$}P#?2*uBKe%V^iO(H+HZxlD`O!| zv0a~O3LTSb+%)2wU4>P_V z>RA-5 zSwz#3Gp8?YISw2=UNXV%&!qZ-l*2&6n9Si@?>K3g^+Ju1wrWDTq8kCfGFMj1Xca}| z4ZL=?p=E8=(8%#=dvqF)rNhUF!?>vNg6A|M(UNUvlJ4CYb(y=!aCYp|F&820oM1Rs zhBTm%#>H;_umTM=+$wLdCsl_fSMN!yATiFxYm=Gdmij%}3%P@r02J7fo8;7unLYBJ z&W|AIYc%_%ui^xK6!>UQ=`0FIc z18O0fS{vQj_WPpHFPsDfzuk%3br~g)dAv*40AG_+N<%v{=DT@DIW2*yH=(i7H+uFu z?J=QBzX7|cQk8|9XORqw_raXR(08`82gfJgB_&6~(eZ zOLEa=x39Fby?;e@>Mn5AQytaBYu~VVx4wK}d*`9kl(v8QDEj%TOtspa2aif$X*v`MLa;S-yjcE^bvFm}N&vZQrus+XtaJ%e~9~8zqN8gqZn`$wU<$4=e+l*}J65EWwFEG06WTDfh5aF;;Y#VNy zaAK;&6Fao?PQ$k`IdawuNpV`D#B}67Nf{XzSJ)%U+m4ie_mO;hlS)7Gh$?;P<2ZBP zIEY=2c!_CHrrNgWnqZWG)2hmX+md9m40p;|le8^DQ|vt

KSi36==Qqt8?G{cbz7 zkRWXTS%0CPW+Xgb#Vu_!^pVh=*sE0|4Cw{6IGb0Jd|&ugBrT^B}GdO#m97xTH*rk5)S5Brgsgn;Ns6JGh5v z)aap8vv7>dON~Ifjv2$Vdh-R}3bn5(&DtbMQ+Qz&X65bZ8G^BOQ*h$s@arCxlQ2;L zt_(kbYdd7t2E49fR2VFO;P-s&V1SmAe1X_`Ox}^rjPU4z7Xzc^1%Z~a(^VG4^d$}J<{@-M)9uTgg% zs9FSQ(*4IlHIJS2y7m1WXtMni4>Ia9YWjdstX;J7vnKy*aZzJ5D}4QSUHhJSvbqRK za6)k`vj#CiM;^SUWb@WiHnj?AEe&R7Jjy^sS%66YH(5bScFu!!N>gb zg8y#1_1d8Ot*joogUcMld**u*I3)z1YHf)?-CxT6Hlf)ggxnt#Q26MKcb|lIQ6LBl zDQ>z!p-2$wSbL?t?G+EE0gZ)F?pUE>x)*!3rcfAPfSOcXZbZn=P5PA02);AGD=Rml z_}veItNX8vL~UH1?)s0rmmkpv%m1Ms&`|=pT0Vys_gzzh-2m_gN{sEc*kQMraW4uo zo4m|a6a&|09(I%2glPRfZyz+_!B~5J2mZ>Bt}#Q%VCEQ?fu&2T-22t~s_9|(^n-5x zTh@noUS!1)v_wv`-onP`f*ZPLyHq}Yv)=B0Z;Lt^T!XBykr@uk$PpC&Z<^v1ern^J zXbXjd?y8yRQ)$PV@_C<;+k4^dEi{wW-A9?h+8zep{n_5GLm3G=-`6)$m~w?2nHI6N zRvv!Y7+()O5iG(vUNy?yB4RXjxP_r z;h;t@w3r_IQCDuz!73}qfzKX>rdO`!-ouyA0ELu8Y~&b1Fa=!@b_@h(v8r_?448$o zlU1|X?<@m6gjyo*y#tPfa?sw*p+6H?CpfC+j{GP~@0bMm*uyJxV6mKZ2X}iaZ=)dU zQW;@GL_MdLG7e?UUv&2W{iU3P2c)%#x>UIc{P zPeSLmuaSfJLJo16+=S(D|5rL^Bwf#UM&|t!lWZOGG?V^B;ZZk=NWE6p zRA+6ubD=@_rHG#M5*EEHue);D1E4@P>)xJ0(z;(5&82H`|Q zZCr@((^8?|j&&j#_|obMP(a2;lH|MuP|Qm>OWb#jv_V73PP`1#KJBhCCQdB<+u zJ`dBx2BWAcj~6^J=7IyK+>T}H>>b_x_#szVtVzo}LW9RQq= zF91MGq1mhDzbyq_D>dQL20}oTbj@XuI{xSPv$?(AV{1XeE64l{x5Tp@UA^6E;{B*# z#4g0=fH5ju*}gp!*0~PFvUQR;Gi(y_;yWgJeYIce>i9G ze$whT%FtJBpWMN(>iT~v%+E#+4%Yv!3fyagvg8Ib&40WKnt(PU1fhu`P!boF6M)|9 z>u&*~-Krc7byoBB2AC!3F=Tz)TA1Nx3_8q;B{6Ggpxjn=M%5%}mvn@?9`QvqvDSVDORGNSbCAKkGZvB!cD zgWgy^8up--$o9+d>#>6S+CAn=ezsFX9P?>{Td6wclXAg4?2Z}0rHwJn?UnA^Vpfa` z=a7n916#T9VgX+ zl}n#v1?2fu9m|yL!j7V}L!DA`zGYMge|gbf9d=?O?yIZaI1m<_3)0`G7R%L_t_OtM zeHj~ct3Wxbn{2j~s%^4ApsG6{BF1}nCdO07_WR|h&1ox6gL1`e^6BUL63ZTzCR#6T zgw=;+?kC?`t%53v@224Gg&OJUFB*Eq75+?oG0VOJde%LzPmPK;taJ37+^+8h$KD9eViS zls0$G;8xr~6o-t{d_A<55(4DdXjJZ9FDC|gvu#=yBDKx%K%qp4_@CQCC zQWO`p7S*_QLTcoMVJjxbr~?g@Ci|2JT;k`dn$ICVT~_eS=K495K*dvw&cxQThH1V# znk~X9JJs&oLiYo~<-lswVaX@j3F2c&0zn$0v}4!|OMEexm$q2)4+5`c+*{H$YlA(6 zX3MVmo1kwR%0~MgX3%Nrn?;G%`c>_e9^1{b7h|7O+28^7m*wyYknwLxANE6|2+(@1 zP+*>=Ho3HmUw?GVHbHcW9dgpi&FhCevn7*GKfUGZbc6`1Syir|WFFl>!VTUFq8gd? 
z#=LCo0G=(i*?QTHnX7cfqALzVd~Wo;Z#;SgPJO0!5!!Ev%0=k9z#bz5ZssEmzi|WR z^nPG>3SEgi39)d~D*b0p;@H-sL#7bQoeijScj^S!$RM^p(f}~JLv7Hsxek`TZcJN7 zjkydzBuRUD76&9t-FaJ!$ykI{4wR4pWE8zZM#x1DBVphNIgqrysn1CuFEQ%IMO6YI z(U7Pdg7#GT)*_&!bA-DmuC^a)im;8mxXa4h;qb_@k9kGVpjqzYgPcZV_dwLn(u`o3 zd=tn^3keY}USJy%qr0HtqJ`c)cePzH{@?{0$$<A`uqW!DzlA9PudwI&J zmFc{9p@K6raxUl1QD~G5P9JhA?N+;rG?u&1q{5lBZ3(p^T%9UEIQ+CtTcNBus`5}x z>bqj-8IIe`)t-j^=XaBeFWMmTtK`t<8Q_D`%6@s3@337ief>pw*Ik5E3<%G7Y|9cU zzew;Nge{!h*}^X{;-S(C8xvP(=1h7rT$Ea2AZ+3b-OE5}-=Zv|HqP}vx|6TJLMHpQUH$KiEQUq)8j+pZc@4w0vh5CU z#b1iLw@N!#EWe^2F2wBc(`Iqh^Up(UIIk$+7i?S)SP}+S>01rD3{t<_Q84&o^q1VL#>UUpd~+469`Dq&C6LX2U~u`r zF2FYjUQ`@C6~?Aavz2$fAwo;1#bhKtaUcL_R9R}%K_oRyFayWQGjhn`ZSXS6a6Iaf zM($4%J94UYAw#%ICCt=mmq2xTg4axAh3aJ3N_Hemi4b2@x)wur;?l_q#q2xDx}NL# zxT$-ztRqkiCh$&y!qc9>=q2Vy?JkCYm-x6*(Ph~TT&hz$kCT{=Jt{LlT1a=MD6x%G zA$#PH6*EF5x?SgbDtTL?B0nSw0(2(DdC6nr7TM<*U^@J9!&|y#%c7>N<$>_dW&yog z1{s|edd|Dz{m~)VQOsz>z-O+3J?(6}>E0K2%o-$v69FE7Aa>j0etiW9!kPWb0%Wo7 zQ;>90*6iBp?l-}q{b7#{-XH|(ZdjVXy+#n0@~9p>iOC)K#KTKF%D?wb#La&{2gYl3 zWLw2{ZD`x^!XMBbrC6}DmhM7aMM65 z2lagM;&(mK!@s+3F~TVu~8J3g-dBhq>?Nn5);OlAT2~iat7koz&9??<_oufw=y=IWQEfW zS%)JNF-$c5Ld9Ap{_Oq*i)KzWH&2Q+kIQ=z$I`$U%*ESckXO9tjt3z(vM4b&nklOFa41KhAwHO0 z7ceQz%`lOZ%3uxiS@s!dV6C6u0AJ zDN%Smg|bUS^s-}^lJ^#sxNc+5HwsfYEr*lyxs2OWR+lr4eVuZeEEUHJ?rpS0$`1@W z5u)GCUo~U+%cVVrv(f22G6?f?XL&~$G1Tv>!UaQ9GFM4o1IZw+`c7{fXJBvg8FKO% zOGAa!rp?aFK4ZOtC~7ATVCc}5X)b4Lu<6(Fc%AJQTo4v%UUC_4j?qw5wrPNBCo^xg z()R0ecEVd3(x1x>0C7W>3)_+fEp-pM!1>Z|wxSL*V*UXE)G2sBPJ>!>GAU(A!`tVY zO*WX?Y_v&>QJ70A3aLhJ^KIXfu=U`8GODd)He!kOi*LyMOtc-89-x07W8_hilJ>YT1qOOklcOMAr=kWn-Ze9GQ!C&3C&MS?b2%0a;k4#q0hGmAp6 z*|d#6ys2EsczlLiFxf=RK3wQM_I+Z$JgZw})F$6m8H-dR9N2izNeMsoT~nS@TweCTl-OTR?OS-Jmr1Vgbp1wI&zlhv+l^JsFxpeU%`8eU zn4Gg#1CuQH)crqCI)YF$)DIz zB38!V-*z&^r?rSid-2_oC1o|_66?KyhtquF`@*uj7TP<$smvt1kJkHT)LXY)!L09| z?gioaa+$R==7Y^7@1qp5rN`JW#rob#ZIVqhZpjL-rHs9bdrFBpVe^>Zl4^knQ1ygD z%M}g9xW*n5^S_T)_?~NAEgxtQ;Y^i%-F3BK<*Au(=`aJQSUrX3Pmca>+CKOgljN<# zz$$o_*J{VS{m))U^dB_J4-*TJMUxS z#R?4*GSfzZZAP?asgb_ANWU4HfMP26s@A&59rGbkywMkz-#1&-hyi$Yn8@D+n@D0| zkDOMYI*lCp+i?+xAS=&HE8>&!z_U%+%pdF~Y#IMPRlOC;dOxv_*fuJ>XwB5HA;w|0 zzK(=XfeG!%eP6Y*&&i+5u<=$}>0El<)veCK=>l_|B>a(x$jRIegWB|>chqPfmbRTd z6R{HscSyi~vwxujd5ITe<;{kZWISW_RTv1c_+~!(QaA+%uHOp9zQ52$3=HwekHmXU z1mH{~faqvr|GF1{UFsoILMH#Fj9QW8I&*aZ>&Ra(mWxk9h73a->W<6tf`CEtd%$>u zwB0AE{`_G}|Lt5aRH?08e-E2DD0v3}gE`$hMhV1#DeT{|9VC#e0L~2Z5#zlVd<%f( z@!x?ZP^lIG8Md1eZhPf_DI&j za%pmaQ{p`4)YKg$drN1L~f+tNE_T!u=l{HG#kpt_nkXy1yh0E-*Pr7Q8HLp>O6i zpcDqV?*%wP^BcAxG)6EA2?MZR4eRQWIUo3!bH+jS>Hrx=*US+cz|QHvgPrm&L@{GF zbkGe72LtUhR~6lf4G97r=f*{S+!K;5Y9q+*G__t@iz?2i}wT`Z*pn8j<-bNph*QQ7+q zn;By3+z8QnSD>l5vG7#pw((~6-52K*dU|EDPi&5c#mh2!$B(NfHpf!|Nu%rW1TccG z+QMY_%m0*CzU6We@?$h<(N~z`XRXl|5h?X~swTBIaDzF0ZCn6)UY`Pm{?$%8s~+}S ziwfQnl4dBVYtB{s19d5d`DW-6lBauUfX13xbJI-I;979_c9qR@cgDkZrtrDSv z&=aeHU@=gMD!+uTg0A-8BbFl#+}mS70Y~Y=MuO!qw=Wth*kQ?MQ%73+TR-+7a}q-Y z+VOj$`9s1tQqGg@aLjn!#MyvU|obp7Kr4obJd z^yE)Thb}Ibi?AqW=|{WT2ttu)vn`9k7y@6R*5 zOHG4x)8nBc#;r&3e5By`VOu^^o41Yh65o|SRtjI7SrWxXol)Yub@U8VONv~0(rwmA z{I;Gr&biY%pFUj2^0T-3m@oDNwL2FfR170v=WX8t6~yF+WaK~Hm!zQJ+wJBmcGmY9 zZPik@Q`lO@g9fRTyZKU83#CdUVJ7Cn%$#47>6>3G>nodl)veA(`gt6EbI~d`6Sm0e zhJIdMFS5Gar@ySOl9yXvgCgwJ2nA`r79O^eeH2%YaTIiwQ!P~CW+v5xtMh1FRNDFo z@A14rltrJ$VKS31Mnoj zJ$KkID06shzAP%sW@OQJBg{vpRK;Vec=J!gPThgQMnY#2gWC#eMy~R|i5q!@(4Vc)v4vDRrS4h3r=G~%i40o4T_64sm zh8+@-4K_Zq)9`u6^p=0~U8$hby*#Yt5re!%s!wurUwIotea`D;r(|6p58i6CoH5t! 
z&2ne&mFb+luql@x%NsVE>mE?=*ExM!&V->%b873n!j&7Hvmf%|chMqhdhd_608bA7wbEYj(=50d`pwERsfNh|-#y;P5gzo+GN=`l?x65Tx97;@@On?8TI8t;4dyOr5^v#`I)5-ZRENbjM60Et{<2MB{miCO4v>FlaB4d1uy z)e&Daap|=8g|u5N_}u@HnAh(m<`IrFVMc{p=Ze=m?cfp*Rc@m~f!Rq;<=nAQo^Ul^ zCLgZ0z3v_yjl|nd&H-p{U-P%)d$1gTzJy4j{QeyRFM#4Bx4_?#@bdg- z{Aqgv8Tq6QEK|Y+w{W%Ew+J}0#bZH*UnagrxVm2Z^n2HNM&ioCYmPb-KW+K zVjj{qgeFUm9h`vlw)-2zhO+~sa2ic7F!2Ox6v_g60Z&tWdy6y|IMUgt?%GK!ENa}8 zA;PzqRE@bsc4ez`Jcr?9W6u@uIVdWclp|#4pVJ*@RoqV-%#lNxQbMO4@mqXH@!2{q z#a4zyZw_RBMd7n``O4jFoNp^Fwr$P z5Xxd`6~LgSzqM0ZslHE|Np5`ClO{XyRCl1*ohlXY6Rp*0-G)GXLk9)_V53{4)#Y<* zXYLH#QJ;ir`Ac^?!=+>?NujZt!quweZOgaO=I zS&S+UbRXa@&|l&DcK5+Nr4M=!97F%v)1jA%oQ$5eo@+H5g`}zYcYOeJ*glX=ZcJWQ zK0S;NY~U;KvYp);IMR*7iF*4?NK!Ncvl@_ox~np~!eojq$@z! z|216!m&60N4Geyby>DHO<#(*i(eCQ+3r9+ubO)g7Mv4bqrW^w7W{Oo8rRox-pETYDco#7UUJpLujjaFgzj z5qADI2z&`&MW?!-ibC6st9=LlK<+Ska3}A_*=xi@?a-p&lpl5OX6UcZ3;ADvLLkKb z8v6k5*!|H>+4zJ2(5LN~8VBTn-*!fpt8N3(slc`)KFw0aPNE8AuaM^>bI%qHJD+AzZ;X}$2p zbETPPpVH54Q4|fePTY!clsls?Ntl#nz>Sk@{-JWQiUr(CQp~(!Xqb7Q*041ptj+B5 zUT5?j5-I9c9w1jb!+QfRF|U?rn@>{_ns?*(v^;RVLT35aNUtX&sEMqYZQ;(sxvNry zlmM1WE85*_dCCwjW=TP|jN?_Z=( zinEQO(Hc?TOGY0l!gfq;d>z{!b@tj9mzpZ-KJTVfQITwe25GpW7Co}StKgA4+HJl> zXuHBcJsQV(blz<;SGYv@UX3V+P;Uz~8uhv}{aCeoBL|&~*9sYlKgY%8^c0B9E+bcm zBKLej^7(3YK{#8t%8SkZfiZIOePeTrT@DazT0{P9tXnf&u)V;)BCb%P9wSDZq;!#c;+$TFOA2d5L$hF(n-ek-LwH$mR=(!S!uD@RzSp4ou1kN#Md zYGRkSB5fPZ6z|L(GBhLK$W7j@*%0BP4XqFcuF=LhG2zWpkBIUlk>2HL+Ui_>8b?OB zv={DWxto3$J+QU#1bUl(w>;!FUT%_dA$>8px7Z5^o`SU@?2Sg;RxNLve1uDgFPPN5 zmg`AaIU1<5X2>p5$20Lz*#d4VHt&2Txmr{01Rpi@Tb~o>Ta$&;=QpQ|_!;u2J=P|z z`tKp9nb#Efmiy#{v4YDt+S24A4Ra>Mb0^$OwQ6&H7tB35^e9g84B z@yg)St`^{uXF;Dl(Qy2D_FA2?h0<0{_JbKIvNkh*Hsi+IcziQbSFRS#w)eHf-#gf2 zU+*Ve?q18CD<>n<65J6h{abDm@#_hb+BQ&;|Q-~Fl94WWc?sSAM#-rev1JVmVT_xh~qIra(CoXcJ5 z`C+f~*>1hmH&8Jind2=jaddz~&vVvWO)Co<4*Y#fBQrJ|o6CIl;~S1v<19P{mYr9; zN%O2G7adL8=^6@J5>3XwIn$h4ajd^Slx<%h5MoHmASX|E>-czXJ15(SMP+o7FlEL@ zYlwfn!@Nhw<%PtM8Y#@Nr&pLiQognuN$DTvp7OMt+ENT*js5mS%~#nptg_ zVX5*49={}?URroqq(k9?`s#w!DbJwNv$+y#v;IuE?tFz-9zy|&jIl~HAs-^9*B9i| z8|4&EMT{Gja6584ygS@B=JeS1h(RW)fv#Dq-w?48D0DrimzYo3|r%*&4bME5tNPK?Vr^1n|B?F-;N;eE%OOkz>Q zW;KYjL8C89x3rXRMi=?%R5OxNeYmF_C{OS% zPUQ6$HDr4`PrO}j*dNtD;pg`9gqPkWYq;(A{!~$8%JUv7xo{4yl1!n(yUo6#G{@14 z9VsHDW;%TeZSNaDB?NbQZUvEqEuPJhIQu-U)@H_Er+76ny~&YE*xh7d&`)p4Z)v%; zYr5Pk&?oH>s)NoqjGQwg^Zfhhi_+CwNnFB45sq3}E30jK{Pm$+?48AZ@_tdX@x5(I zYQ8<>gkr=>2Led*r4t8ElL$IGHocgu`P`o|n)~wm$m=d9@1el0KS!72Rek+LoPx1Y z8ucZ~SeKVnoQw=PKacu3$}GOg9~jGgD)NNGYgp=Bs7?PX0v8!7Zqt%(^$PJ<=apTy zMRamG@4x{+ANp?_ScsgyYz+Oe;9MOoq8)I;S4r%S*Tnhi&zj|5_+nVTQuy$btq5=j z`m>S+qR*aFG~7(_A;Yj9r=DFExe~YPCU3~IHkbZBvW7jx>4JHCr2bYTz6so0nB7)a z^d&Lx+eeZCXGK)t7wSUg;5r&^{(>c5+t+nIZWInzY>!_}KYf}l*R48fPQ`~}p7fO4 zLLdF=bXR7_2|r`wqI)SdRJ7(FKHwFI*YOX1pEVA*^gFRa8ti@8I84eVh6WezDLr?_ zxO78oeawYWFED;sk+<*|nz(i-%JbFgZ{##|MYMiM2`OF1IR}tFk&j%OXkGHAP ztAugzTW*cU-}QY<%`N=|Z%nz+(oDp3?4VyoZ9=}r^CU{OF?p(**%z@?fiDcQo8F$a z{2YE+tyEc$Da>i~8OC}hd!{Oc>Di&9r|mDmbzduI2Hc8$6fQ7Vf7hvZ^;V61OZ`6R zm~QYma{?aS6H+TnP|IQh%%D`r{`cZOJ%z?3YZINEt@240-HuBU=f%6^gWqaKxQbsC zabXN+jV0g?`{)LxSuZoy5Q?VMe1<8zFViuF7plrl-bc>09zA>~ zVzTsDChaxEW+IgLJ6CHjbp&xD_GAAP3l?-PbBd0{y)IAFz#yyau!wNti#5M)hrT}b zFYYY;vmR9~>}MTTgU&=&l~Cz93mKUg^u5*?C=jVE=U~4Zx46mYD&~E>O)=A;U%_j_ z_Cu|7$kaV43d|)Vv6d?Vv1Td5FUiO{IwZZijj03Q804-!&(K62^p7>OFwUHHbpJj7Wd+aBdZRX6o=>}FKOA(X%B!XIX;VH6J78<#p$N@X=U zeD<7%9f%;q30rGNAO}Gqs0*1aJGL;U1-0&F4pINt-tNkMbn@o;<2{k((|euw?>_0K znsa0AZL^Ej+x0#k>eipG|7vGr#?{10r|rz%-?IO9H~ILD)z7~f=Kqs^$};aw%+3#= zq#ptA6pnO|{bD33JPT)Ez#MQdk@nlJ3lAU0?NLdYRhDIWuN)XY 
zyZ?QdE>Uu(MWW8*{fiBezbk%BR6o6GWsF#G>5M07TO>VtB?}XBKAqi`sz2pU_mY=# zYW4qKCtu7zabwf>6aT8qyWQu_khS0c;(F@OtF6mrLswN6HlCdO-A-cvoWkli_g7bM ziOoM-@hblO@kaMKQ9br27q7HuuaC-WgcQBTz)&<={2h4ifPlh>N}&@6QocO2*L&!0 zY7`-1{P5C=T8C8=<5z7qUQ)Ze^z_C<&;9o5#nnGP`>9Az^ki`SEWP@lN54cGrrzRx z>RqdM|0HkXX6MI8jF(MQRkPf5`_adGyWnXJJ@)S&2(2l(^K(M$H{QrgmZsl*iXLAt zd>$jW`Q4uV--=#-u&B(qdHV4e&+wyLvh&vNy_UazUEz_2xThNTpFR6@hg<(>-*1`7 zL#?l7+~bk`W>HY`=hL6Zr)xvJ8$a#1Z6V7QOb>pvPxGqEq8{bP5zcE ze(SJ`|Iw3qeZ|LR#dhv|xAC#s^vShFPdJ@+mV1<~3u*NH_pb2q@+o|iWGc_RcdV^; zYk$X_wfxDG|MA6--=8u(cJ)N6#bdW``qKNJMcj!g;-9+M&uaagGr?z5+jIAey91BY zF}r9AuD2K(Q}q=LBEOwKzymIyi}fbN3QAh7x%O#w>ih0x%I2V4lo6n=U=Sz-DSwgV8at&BYM_cu#4aQ*a)Ou!Dv>}{IZ#U}6O*tiG`~WHfN}ye zRv@W?D0WC@lvUGeL9zyH%BT}S!7~~PqoFWjGiBp{#>)3!6cc+cykGzVPgg&ebxsLQ E0AAbOegFUf literal 0 HcmV?d00001 diff --git a/assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/3_linear_model_best_feature_combinations.png b/assets/media/blog-images/2024-12-xx-optimizing-hybrid-search/3_linear_model_best_feature_combinations.png new file mode 100644 index 0000000000000000000000000000000000000000..04056046ae84831cb38b908a5668cb4dd27e524a GIT binary patch literal 53216 zcmeFZby$?!+cvB;B7&kK9l{{eB_$!!44|OWr3fklGQ`k0Dj*_VgQQ4^lynY*f%MQI zE!`juLw#%9Vr+e$_xt00-|soTN$yZKhJO0cA?8CW@|UhT8_x}pR;YIl)hrFgJs^*s z6u-+${^IphRRx-7!YZ`2&}3!hlh4l4h6~qIsz%peQB@6rM5xM7xG*lDnpgVvww)H7 zeH@!nRtYOTIo&I+du49(`IznE^-{Q3bFWoRGWd^#4v&QS;i2EZ$i9(1iXeS;_%H#b z?4jSjD09i0zJL0!U*djjBc(+0v;F(y;4gSopZ@iB+>gPiLqgCqEPokV0zBbE@cVpl zPk?dH{AJucYEvK5vjo3Q^4rIhIlvRae|drgdItY52bF z69kawf4*Io+3m0el3(sm(I6b~2p~sE|D5T=swV>fRtYc-bH1vhREi{jiY8bCJi&ta zbEZHgi2gF}93j%|J;|j%H}C)7N+i$5&aQBb{vx-Xon5FdGt*->WkgQpuHKFPTn(GM zS|%Fj4f$VpF?lRU%=PALE|iV;CidpqJCIF@m*3vz6?d8|3Fp;=9vN#cUvcW%Ny&rn zUn+f7-Pv<`+*vD>UfTWIGW$ohN_Wyy<0|F7SV`C_Qt0%NleHX6c6L3zGoH368Iw2H zdHKj^;%yz9U>?D`n~#)&7{k;J;HozfJ6cK3$I)wrUwK;CziY(Tx3V9JA=)zf-1rI6 z=QKAwWNj*nG*VM~-`9J!J-D_Q&QvTvOK!%1Il3n*EED4SD`Oe%m&Z^lOz4RW)Sh zHEVRmwHsxC(Qd0v-U zXB%#{eQCKHtDfhMQR}J9CgKV@>Y=cJex-8d7VBrxmGR{S^By?5mO<2nXIwQ(*_x3v z!9Ln$l}Y+iYV)3P8_gP}6#;_Ms7dP3roZsbf zn@E-q>AVXhhI{eFP5K7WN05eDG!U;;o@6Ml>|>uyU{TL0H_R^0D>?~nWI9hge_$(! zhXkzB{L#z9tTe*Q;(?!>V+u-X5=4vr>V<^5`!* z$VYl^hu!ZfQmc>TQRk@%ju*rzGG{9=3W#g5?{t#3IvifwP~ zo!?k>)@NLBi0L25yQ;w`p%gSuw<%)doE~fcsV<57Mn-pLl#4&w&UiNBI4XHT!E$nH z7AvWnh^Wgug|4O3o~SEayQj7?oAVX^IoL~jA%MbsZdNGvtIlOpxPN$aX`|`jiwh=T zUil7{X{E7_`XZK5nHaPwUf6@q%*iS{46^gVb~3BCU`0v{U3WvSY@bw#4rZlrR#e4T z`fOZcvSQ~EEi}eOt58e42L(rS?|T+AdzsmR1+f{Lo-@t?8;^+(OT zY_G5_^?EpUG@Qbii8bxoKA+){P^pH2%jCdDHG|oO5}MO(Hl+fod6QRfqK*Bj@>7k^ z^r~mS30uE$ByRZSQ16Cw;_c6PVVRg&S$rzA_vX;C)C49yLI|S5lRt-{u@X_?q_Ic^ zHNA;7eq_B!1bNq3Pqr@7<~C97!l;_CI4dHzdZF3iMMqer(PH?{w2jI3%3?scPuAgOhc+PbcJV`fhRkpq(s-Uo5^!a z+t_{O@|M*A8dGdZYaDCu>R6gFUU--H&_K7%Kmp=fwyvTy);v9P%j&V=jqUrTD3=f1 zN=ZbJw9eOa{Wk?|mtcCr^Q9eq>b+}cAYQhg=b)Fp>}M*DBK$B_QcbI$_)fRwqY?wra(F z)(bzYc*pH$C$Kx-ONbb@*YI+h`g)Z{)Gn2Kb3}47Uu`G`^QpZ#`}1e2!@0H|no^~C z6#}=j?G3^)EoG>!&bI6|^7QSQJy&COtSMq=y~Z^Sl}+9D^%+^Lr+MMvbIgwGofffl zJ=Dz?wCIg@a;;T5w%VMoZGJsx&^vp@7iG+RR%kMWoC<96sXci-$_H5@xWen!cewPE zugvg>W2N)m_)& z(;c|wi$WWhQ^=k5Ng0fl&Uur1m!O5ychJNaSIX?|-i#)#m!{RB4V=WRa;MPL8Vuu& zP7i7xE>;(2M+ZwkSqN#nAE6GNz_QP z8a})B(O=MVxO=%%X*AZ`bD)oYb=T^_%UklIQmL3o(t>8a0Ba*(#b#7q4$w52W|1c?549iT&zBQPW6=YSw>ngsiH8JUA^eCm7E+7fmF%yR&#W5Q@2(p* zUxp(-U(JC!zUP3q-dohypy5R-uf#D}_X)9)UoW~m? 
z9o}Cdhjy53XYO{-?@7`&L`21NOv_9Po0jFbhiQ1N6>3VRJfnIrv%NZBZeUWJfPKNC z!e4gaeY>#vBsn#D+bm$kC9N2JoBGU*b9=XO;g`aWjrqM|v)x{` zFe_gBvaeV-`?2UKNW+t>9`fA<&SvkAx-2zOnD=g8()bcKu%(mqSw+M2M6H`Zmfj6~ zk>Dye4zHX|$>z19>nIP$cO4;jyHL@BR=G!Aoo>Du(p+%fV3@>8i#T`n`QvRFAtU^= zHCI1%xM)yjp#AZBc6KWrg3>4L2mG5u3TH2fkKXX|MeWag>F^%BEKoZayRj=}4dW$P z9_%6hUW&ZulQkXT;BU1pBibnK-h$|e?n0dAK1OShB$++6d8~xu#PYn(4oea`x?045 z5^FJ#UcK3Xs%`gY-i=;leTALZ%~;28=a!vvJD&Ek#!<&;&XAcw%E_=kYRmR`&FY5_ z;ys6rCJpAQ7iBp3P+Mj%s^rn><$Hm?ab&_n(We)9E|xt^ z8sy53!Mtd(9d6Hz4&6{kx(%n70THj z?LAcLW;>(Lo1j4xeUJ7Iy0GY8)^Jrn+$T+X$$WNwV0n0lFD_BF_=;^^>*#j2)Nq%J zX-<*W%9!fmJ|c<1yqGsk_f3;LI=|#L@dU*7oZnt&3V$}PM%X{uD7{?3TSz-NYxr>6 zK=*ndu~6!Oy?XOX8df^EY|opHj8P z8700wL(8yUYWq*+E~ZL>tK<+i7{0+vE2}|z*;?yTj`8rnF!<( zZCTjcBwMbV9I7w2H?iY&ZYPHH#Cg4|pl*M0dyT7~9O}?N-EQJuY!GAYI1*V;zlRDP z0NIUhqDMDjRn!uiQq zbWYGr7%lS*cU?E6Nn^$B;z|4=dt;e>0~xVe@yW?b$^PhRa~SWF{bm{C16%{|I5Yne z6>_TjU6wjsvi?ltlhYgPaGQYX?s?wU>TAt+9|WyfXM57opWV;06I=+0){tGozE&C= zS<%##^6`9}xnXA8m)o}m*{qu!RHb4q@j5iabZeuaQaV^dVtbPa;wtcx`joZ)1z4q{ zQ$pj?oSJqU0YrjJ?lM`NXHI*Mhp1@d1IyzjWTBqYx1NV{duq*OPg8sBE_D~Sm3?5I zWSz^kfv1S4VkBC-4aL1GsDuelZlHXrI?fTrOzVFCavMSe19<%yLe?SD; z(sFC+>_$Thbv!3x-GJ{-ryM%h-SYlEcfwKg6G-!-w;7m5KfeI-&s~MfqJuC&{w34{ zCli?~-I`=vdqt+73f!_&WSR#!dZ)6KU6*ET7BU%zY^(D1Kge6c zrnfF6?LUTH{-74?NR#)%OmJn5BgJBkN~qK!M%W&)QX1SNIv-Os7mVKTnR_B=)n)U5 zcW2dJPKM8c48m{6ccs0#-|jKx#_89Yr&T*Cr$^`*!`s;X>*tDn#d8(|-G=n+_wG&X z89p9h#5blSCpNf{zSvmVswY-fn}hk>Rt0Sy=X&92XS3omyG5m0b%dQ?Wj{Q&aZpX8 zlY4g?jvv;Du_sTqwQ_O{+&8Y9uKKup6h35d4o%ixUXt%uxwomnX?RXxt)}dCj2^vZAh(a!UMz=XWHC()ci5 z7&AVV2XwA8%T{7Oftfd&yZMZTXBX;%Drz>W2Je96*F|@D>-yEX|J~n z6>D4W(B3C6Os~v@GWv6e+VX}Hs(P}*YC#Q!su^;joI*R3du}|&DCyb1^}Ka{H%KO`Mk!WzTd|qP zh+_Ip*<{0YN6bq3ouchB2`UB4_1D%tg4N?ispN5Q@2-B~g;_McfvL|Lx0M>%yS!D` zo^*t<^0$sAdwt8~GO^|C9Q0@%pu~0D+$5}59Ik}-1Ki!JM&=Oe;SRBMJp;8G(5Cl<;`#E&EID6Ov3 z#EmSJpT2~x`si^jW096%A?A$0f>gBF?QAG*_fT|wW^ucDk@G$9cd1k&lCnxAix|(* z?4gsy5+BSL_a5ghBs9?+b6lV9%tw(l*tF_MAteh<62kMBCq)FmM*C@e9nKk|Ri2(d ziuJ1Ws@?W%X9}!hJ89rG%F~Zf;rGDI+-gRZ=t)>*P9??lSR(@Ol9%O4boY}n!BZb^ zyOy1^imLgf%#To6BsNg%Gsq#k=i?_0yTSKnpvrtBepjpbu4Nh>ZQxW=LB#p+NYwT* zWsqh{?DK?I$zQ%<^!78kWdBCGnL!+#WXj+@P?;vxv?N~Y)@tp0)~_yec8GCXcZRh@ zuk(tG<_6QnvHoKnZ*ObX$ad$yw%4|tbjl%eG;M$5g6sBc45Za`Zib5&I$+%$?*W_B4P99*9!gkj zd~<}2->y5a?*4r2#5|+@Y3IGWt+SX#N>P3Rfd`Lo3@H`F#FCY+STg4vfT%qHq&@@W zy14Eg!1&I|EluTh?zL;4aMnQB;UX-=m0n*wf zDpUhCwI&HnvfKX4=Z>^_0NE^ZhrcG{o{{sfGyoro!wfAgq_$1RPxQni5JZrd@t^ka zs5ag!@L^W(%9Jr)X5L+&4;t)!MLpSa~bEVYE_a_tSuj z*EFMjY-dtVbQ3>FVS741$F+`1W-#yWxvD8>!2qbVl_5m;ZV1^`_~bnevZT0ks*KL{ zB~#;l?^Ws{n=7M(`jeF#I81B}zFH^Dzhty8ztMc3oC+S`eQKp9&BfgcYe4as9NND( zos?G=?>BhkRk12cFez%NMtWA_%!2h3+Om?#Fi8Vh>P8PsY})cbg^pH23DsSMikN4P zIXqrQU`9Q9=iU3GR=xyONP^zUIgCBr35h9*y|Cxk!O^%?)GDylzj^mCA*5LoxzK-l zmM(ibH#9VKybZ$8-me--MTFhHK^^VksVsW;is<~S=F8&La=R!c$?w2ptvKibe5jIF z8#Z<*8FJo=RN_0~;8eFDwH;eM*+$GI!LEyzYMwTL<M^9i?I?pncGbwvh}JD67{0#=ruRVS+c*bmOwcOfzuIwj=5|Cv!DI`EshRVt{83jn8{bb!5)-mEi z&W@r+-AW(0hG-xQje;JmVO<{vfO+&cEdsWZFP})Ra3ibaZ8^$Pjpv1h zR91Cep++1k-D>qV_Dnn(+nd>bqrCE2`%)ecV)RTyJGZDC-2~G0oEBS$j)pzhF|J#< z*v)Y?eOM?+=3KXE$;aukz^F@SIV6IPts4bFG7MVq=JOY^I@19T6UMX0G{y4QSC4m~ z81KhnPtw7MO!27TjZ^Ir4ymE#rB>r&{-CVHY&EA6No3pH>+9ZPnC3lV?aI5k?eVVp z_&+@v<{iANAop6`o{4C&0ZL+0T+>$ZNcv|B1*~DRR8pA_4lTJpm?ba58t~o;7rj%utII4a*$Calnfdt(v0%xS6 zr>35KPPoA;S7RAwHB&1?SNv>ndBg@m04cL&_E_Jro8(P;vpxMhV5RxZ^hI%1zHA5U z;{*Nt>@5Ou+6X`C)N9=XE&8*!9MYPNM=q9iY~+qWsD~DwQ%EJeFw)VfkKp8NK%2y2 zU)u~-nO6r{jb-y*j;)%CH^_m<(>ipecaBx$TUl(zwkvfT_w0Op)gkgc{c6@DZiC!j zmyemeuf;vyn_6BSHO49zkGb`bf=L`=KS(pNk=x?7&KN8q>IlQG 
z$}c*>>_@5m7s`artm1$V3zsh#I&EL51w!SgZ0l@%bPe- zv0a3zUx{-Dy++Yl>R%ahoUbwpt5m!Pm&%sD(C(3>RDw?>+ONF$((9=eNupVx5B=HvK&vU3~J=UW~ZHGg%|Op)Kr$91Y|(n)~_Y4V9Lv2y;aQ8gATPjJ3=R6?s}ZI z(fO*#?X}{5dTF)B0muvR5Finl>uU*Yw zm>`OB9W1hTj9Xp!EU{)udu3|R#!L9*eVAk1(FFTmS^P$6&C0PIFL--214vK#MEYy9 zD=caw9J~aD!*mCAH%!%CWxoYo-Me^XiR9g-L2ms-(3d1>U=NJf1oVr zEM{S+{Viuk?u$JTAGk9%W(M0*EhKF6dSS~Os!o%odr;e$wzT?^m@d_X{^Vk%Elhl) zjgMaud2;jKr-#`uSLzM&V@0mH*Mt;hxV84c+6x_1hEGvT$k9)6Q=hr&44_UiPreR?1RTtARgkCpx=nwD`F3Kf@iu0BrqtOEU z`40zN0Q6lfyy2KiV}hn?)cMHakYcQL&Y;h$7-QX8CikNI5?q(M<(~!x!S|j9`TI8& zSOIjwf%oGor|914FCF+(OlX7R#n2H2f3bHVgM*{qO(y8IBty%9sp@3o3X9t>vtx=D z9`H)91^vW(W+ckmu`!&}jG)(2eM=`O34ojot`!*3D@wVc+AMiB_x-n$M z#HV4MZ%A$ISi8E#W1eT|Ru%{~)S9<kG|mD{{?iH#ts43Z^`>+;XcMTGD9b z?w@(CV1z!G!9$5Z)S|r6ZExHpD9L0kLA^t>WPgxqoC7R@*FRVScqhM#HbAOAyggt= zILkh)`iQTP6b8pc>VJM7^Erf2u{pOwdr%Y}$196$Md16!dYkur@8&V* z5t2Yz%PT7m(wiTnQIcGGSj9#l{Nj&ZzN|SJB~sqW!v+5R#XsBel#jrWBzJ`WXrCi$ zPc!rT&0&`p4%&!6$NWQ@&RgnFd%{wa3$nqUy=U75s)tevo`OJg8Tl#5?}#sxF-)4ss7+0JiQNQ zU7O9fcO5uVMLcw&fry9T*K_pWu8r^vMT2T!qF8s%gBoe|>~G0Ouj` zA60#Pdknnjp|H|{(f;!eX8zm;qBrTVx|Hpq{8ub+<;kcbQjAcY>y_}i=yZ;2s&@P!ai6x4lwIgsKRw*r%Q z#}Q9Z6y#Cji$}xGpK6$>6|j~d0aF+G+tew$z9Bblsk2wM><7>=#iln}%wEzsVi3&Z zvD!|(Ens&jq9xQLvo;mh=VjPH+nu|3&>_OmeUApF8I*Q=z=tQj=3b`M5@TrUiC7rDpl z!-H2a64j&^RFtQOpYse2 z2lOA>PPDEKpjKoW=VQbRFcyCP{t@p`ixEj2$B~;Wfv*#&kn-2WU4(wq9ocxYYv-bo zqaM0z4GA!}ns!q4W|Dw-a2X)`*&PttDz?+L9C`zxuMjG7auxR5 zK|be<2askrb;bXtSjorVvh%lUtkH{UOz`X}Vr0rw4PDXtb_D6@ z1vjAf|0dVTY8?42{6XHfp;biLK@t$aR+$9e>D8|J5>k!%c6vR_w!`7>afHSoWv?^y zQPBt_31alG`uNKGRo`!$^DqUkhya3Pa`Z8}%SjBi%VDHZW%Ods%0$vaS3^U?ZqIPx z#~m8qv08-3m)NU?Ve|H@SOBf!1tuwn01|mDyMOL9$aLPU!*)g&wLj2JJ!z$uy zWBV45&C}BT%bi^A7r3v@H>a12RE*$FJ1sV8jqWqCWcJZ%Pu68Dy0b~b7sKqEGg-nu ztwkC1QG^OHI6T2=?)g(;(deCOr)r;;^qRW=f`2m#tGe1J;B(Ta=gzrzNg^ql8reY7}grL4E9# z)vVaQ@mq6mhj|&ud>$UVJ^RX4yMaES($*tPQ}68I}!^X0cE`q^RH zugbK}_w5IzgTIR@%@(h`!XPX>C^1=-3M^0pwD+_y#-<^vt2L7nLl~Qrv6R<*Rf&pgxN7l_*&5 zdx>My-1=Z_j~9BP`k5G*+LgP#ud~OR6EyF2OQ-|)raUq_5_@Z0(KOFCp_W3Xy1o*{ zY8fBRdi^}Hj8y$1ouzLgfzzU22!%{w#MXPO_$D^4S7 znN2U7!q#bWwNP*NM|KJ%^t!UC%zXU3GA<5{cEV*M0g#|#PZp4EskOPEV@|w&FUESm zD|3Lk81AmMVAE4unxa(9fzD$&)T- zJ9a8db;`VAqKXoObO~HJ&Hm=RUpKXD#)q2jKF-L;JdI#v&>*;b|2OM>NKM#p03(6Q zvo1u&Ua>@i3ejj`Mo^wtV$p#HTf2LLI%?}$(saH|)ZPb<+Bbwb;`z|`QXkAaR+jRt zbb8yNeC^vZu7-Q?nh@&dOPU+vTl=g?#!!R(TE#)1mz z#^V_^)gCwS1@B4qCJ)+#>DfiP@4j-FTFG=5T|C7ZEMxK+KkU-w5q3__TbjN7(cEdV z^gD7FU8KIS4zE$rpLB7RudwF|Tuj}kUll{Buyr6ee#uj0jgQD{n=ZF}%`D*e=c5KS zE7(Yv>;%oOFe@8>luc=OXQyRD>T*sONno?lsv#0sK#C zYRXEfj!-FQ{DA94E}JK~G2{U%=nV7@aC9`6dJ2p-$5mUsjR$b_xH{`5T8Vwb0*4fb z1xfu>^o7r)aueJ;`VIkMt9=38T_I`P(#&T~;#qUlU&LWYae?407Ow0eZ&!jJR;_l^ z{9TgRQe5Me3g>65Q=xSQVf-$z&(HDvrGp~8{cpvUO<@HEa#?v^us66&^3VD%{Z`ak z;I*Da8n$NoJD|}i{r#J&cvSgjRL-*?2j~HvfpAVe?zOJiDuc9v72w+QtZQsIwO%Kl zfN_8o9G2=OJ3qz=^Q#yET?mC~$&0hE+u4 zbStwz`EO>(f9h#!l{|-J$Lkwc%Z9wj5Ps8Dxmg}I1dt+>CHC=ezKoet8gb_c6yo*v zim|6#)N_Q7F{eY+D)S-5+FyS*?hb zrdICn=1X+NqllBQyu`~J@fG35&QJ_+$#C#5sisVE{1yzpkmH=9X^MXF)3A(LZ$@NE zPK`F~QADw`7ao}!|4d0)|%~ie!)t{!9yK_-@>9aL5*=)m65;N9^z6t562BP99^q4yRStSci zQd;R;Yn?i;%NdD%J^i>$LE>ko@bn%6zpiGiV{Or#|J)fWKC3r_G@TbJTyQc*Or5Ij zt1>U3kCwWA^T?@4s8}z>c>Mn{8@BoQf+>%(sy~#XGw#-&Sjz!Z8WSni&gW* zX@xR4C&a?<_f$aL^u+PoW9r{g#)m<&`*9`>Pf!^`$nfV}%re((0#4ziE{gkyY+L-bE5!nS&ZkfF6FRZbfUYfkvMMpkOZoaUF3-zeLuQ=}{8-7Iox~bsp zY;L2yunN#h)}FVPbW+=mDQ|e6{wn$Y^oWZDUI}|mFp$=7J1es9$&98Si0Cce-1mCa zn)Iw`Z0t$WjrAxEoF52b3ob^~-4Pdbd-JhV%xL8N^_2$KouJOp3cEPA!GY_`qBi5; zXvI7Z65#o@a(fto$H|(v`9)FB{m$bK<#UiGGMy;a+^x(xt~fcM{H*HCw)kNJF!S~B 
zL=~z&r-&U)QVaDQ@~dq4;R^yI&2=)ng7g!z963DKoO?%0_J-4}CZo_ZFm z^@S}x{EFC=(*;p5mA_9mlN9Q@vyyVLc+EGFYuRwTtSzgt^ipwt>0y97IQ|WHFjY5g z+fCpzU+rQ_aW?c6=e&?GR6hrm;NA`ms(W4UnBqo)sKW*rn2)B;!}R9!cPek#ECNBD z)A$=YV+H!hK6Q=?Tn+4m4cM&43aQ!KJ@2wk%p48F<)Jk8=wOvm?6yZ%{~+)nS7Luv zql4^<00NL-_-TNf8>_o0eRI+WUqIG{!v-w*61s)H#oP&QtfG%h*QNUK^O2Uvu{Rx#Ah<@=Wl_sl46wof)epWhB>!^l%3F8}uFgIfGBD??MWhFw;`ZxwI z_BY#njS~r>wQOb{IG6SQm<72L@^w!HStWGw%7M)I8yIl!e(fnhnXC)<}#!0NUv z#iWUkF-E4DIO|OCVij@6tAy^|h|spW7HQ02({xKDafIfx_hrG zuu$8{A**iNP>3~SL<&=2#3R40alpL{OT!NGi5eMnZ*7%`N(YnfSvY}k@Fy{X1N z^*azlNrTsV;e&Zgt!V6fk?+1DT>;7jpqWa8?6Q<{Zlr`(AcG`t_rTs;_tgn)EtIMY z9u=LhG{P_7`b*6NPxwt=9>NKec=&0SM$D%Nati_RKM8kWsK2)WpimxM5`Z?&^;IPs zPM>JA|033dvP_^Vn(J|s^$Bz_cypbzhDW^Us0SXE!TC!vKNSLehp44|`UXeEXstHp z*j>zhA7;TD%#q;Mpw}>#Hq=$$HZD`gb=Fnfd!Xo6aR<7z91#6gac35BXMYO3HuZW+ z-4a$7pN*@kE#34ch29qjk^Fu-!S3&+5bI@TKK!u7A+=B$Y4ymFPWfjyp01IVjpms5 z6ze5-W!Y)1(C%+Gh2Z4Urw*RsPq#!p~u6b|X^VRw*qK=b% zBw1@7d6B1cMr1a&5Xy`C+DJdG1bv1vF(cChMr1BQyHL}dd%y)?{e2SVthgG9@c(9w z^wXe0uKa(dCj(sF@EKg-Jo2}|Ndo1j#|L((@HhAQ{{m?R!S8=k1L{AE{b8v8Yspr% zBkI=U+>)=nlULymZswbR`9bM+_X0{Q8>-?L5U^D4l_&|)`SaBWRQ&^UL6Cqf@Jy}E z9F@gkO(Mvws7=T*55@X2as&E!+q1b<>qw`&q~v}Bh$TJP#+JSMkc-o{-}ydTj`>` zRgBD$5ScEU;3!`i zHV>bQP!@h%8{tpjT%&(4|8C7o{-~!P&OUrth$mi5A6jS|68H~t6nqwsQyg~k*dy5v zl5k6MDz1C;rB&wRnpR^N%w_ZG%v@Y6li034vwFELhN%{mr+Ere55B(b@7Yq}G<=;1 zbzeA+>o=+hQP+N~v%g*6fS)p@;tk@u-Cew}?;!bTMHe`|SQ}13T{quEr;4_Li|0hd zzE4A+IvcIQ988TDyXW!}WA)8XJ70asM^m9If{pdX%N2+fcBxIkw#5VfE)66>oe3Oh zKIu59%q#?5EGle|-ntfl*b z5Sn|!9_y2BUYi7ZgEAq4d}zqjB(FN+`D2qIW19E{ppkHmLR*}HI`i-@kNX-Gp;vn8LzLtKG(;GN^q`o zabE(b8e@wZ!Jy3l9#jiA=<8$-{IKt5|8IXtwGO7 zPT}og2tTznv!|FdoY&sU>WTmw?D;;Yxky&;BdMivN+#hn5VMrYFTcn74shn+^HrE8 zG2{Z}3FPN1gxCXj^gV?L1mTLIkPVdPe=QN)06prn05hnZX8%^>fv)j?Lu~j@T>0S& z|JQK^c**~!H~XKA0hjduf5XrDj~f3+jsE~i{&!~iYJ@kBF@+;Zb#gS*0XW%EF_o+N^>XL6%!ALlW z1`yp$qh)sX%w0p={48QQ@ zoWQ~5M0(rL@AC%R7=A=+JRI+sk-p98c+$V)=N&GS7@4=E5 zw{ju^02MO)0~J0Dz*A8XJ=g^y|5lG{j*UhpNNZ~3V?cb5JBIn zGmEPnc|OIP>$U%)H$S{>eEiD3j>3uMo-Tu58nKiJTv$ID0sKCW*aoPIRf33;gebK= zYOKXnXX^#j_~z|y{Vs#DpUqoJKio6det%}uC5&yTkrW7LuGe5(cV(OVzY?f#>``bS z<0+Tffs$a?Ya4kd#dizFf_vLQ(`Szg++_F5oqNnVpdGVZhR0!y>v0_ur$N$7)1STU z>w6m6|5;3rw=5X^I#yUo_K67e4B^jxbm5jY4Xq*E9BKfaqAnJCgv$NQ+EPp6)1Eu| zm&|K0x;P3GThsBcr@nBCKPUdF4;O)O>Mpw{L_xq#UIs^neUJHH6uD?TGAhrT6La!P zv**6ljlBW-v+A@Q4b_eOq_xlEerh1@yO?i{KF|(15Z`Zj5p(djLgetJ1I4Bg9x4$~ zobWtBr=@8itQI*^ttEF)?d*>k;TmBYIE|PAO@n`IfkJqHv{~0>C9VZ8OoJgb`92LE z6-9V7?=O&u={Evu3Q*~9>X1nMq8yUXx;|M^T|t_Vm~Fb>R}V zt2k)=o29!ZiX;4ppUG4ILIixuj9bk!m7l8teRkQ9Db^pRbqMb`8*U;vto>WQ2PQ%Q zVLW%j=*P8HAm4uUE#HP5?6Xu0Xo&oBNCCGV`Cyz;R3D_3U>B&6&WXwn$A5UWLkN-+ zI0t|{Z~&{a_Fywl`hm@SXi4BHQ>0`uzt5GtH>k79%}E=>ms}+L50YD@_RR(W(eNL; zVZ5VZI)Y$rRUn}J`n;-!ct4W3i1%1J+T?nf-kXoqr0F~X|LDorP~4#>azK2A*# zb2yDV!}9kfSTJlFNZdEz7NL4pA3&Hv*s<8Xo*gO0sWjg3OjNCqzye9|{Q7v;YSV`0B69Pu*_gs^BbK!!y=c&J z7m+ZiaqqT2|A&u!rzCM15(~)ppL~oQ4FIa7g6o+4BAqWZIQgq7nBKHBngTsn`5fGS z6oSsvFRFUZX#VqFNjMjss#-7Zg&~|kAuNqZp>vsJC^=lSA1Af-zb_f=2KOn1U!3Dh z9k3o0PEH;1Kjnns5&y1B%{Xna2Qz>lW_yD@t-{+5J*qr+y1b299?GtMi`H&98y*9<0D$L$s7fnrv}C2WcQ=p2RpIy zlllcHk~~ph-dDqCv+2|lZ}WV}76*6HHGcfqM+acG6_qa;<9<+DhJosk23fsTV&^Bl z!9B2t3pWR5Q~eru4~b_wPQonzlAZKT{IIdHlRag(qQcK#y*4tc!JYhKpvZFQusCcC z2`EF2L)mvd!Tmlbqd4FY{w4FCDTgOu5zgqhxJ7{wfhlX@0)7>i^1K3)@8>c9O;V0WOA++KI->UqAFeLm7rN?TC2b2 zvNjMjo)Pq^PQ{x{0?>m%LGGJR4Z!V=@y{d#;I7pZu~7kT-`WJn|D{wOfXwE5j46UE z1|ER{_ltqc<07lo&`~zK2~jR#ipnX=DMGJpK_9~cW5g;d@`lU%1X-;?`$WTR*kt{r z&?n`a2>0~uW@NrRBi;Io6urSiBYq&S!E_$J)Qi-(OzziU8AVUL6qmK6#en#A`d848>cDUD0 
zG^ow*+}^n$jdX_^_iy`$KLunFoxAgJFQ;$i2#H3bq2I?Zv~S_M1gwApFQSNR&f#dh zM^f|Uj!;9x5vX1Z{}PLNqEggo7!2N^h~bgK9T>hbPh3Bcl^urTmjcMIkrK(qo;Idy zc8u}Fbn8aR#~Od(duOOx?7Viw6}oIGOHsEoJ2I%~uFyFh(fBgi$tgGy!Nz%AYb>-E z(K)`t2hKHUhU}q6leN67dI`F@))kp02#-ni=>+nbu=+_Qgm3!KDsVii8Coka@=~2e z%aBeR?Kn-$Yn|O!VJS?xx{iku^<;lllpUJke=WS>NW@BmBl8yWc|Z1X_%lA{nC7~@ zMJ_b)t#7#!;wYY7uHog+{M=y8q>et{4vv76)niF38c{EU`uXhaa>1pBqPPaw-S)hY zsRsk4mI*i#lZ|z+tp0^`4$F?r`lOr-&qw$-5C*XKM;nKalgK?ZfIX_pbLbAD*yEwN zZ^Z2-+f{F_bB2b5kS}CdcNFs1}de^N$#pCEwf|qSLp-HB4N(aYG zh0w0{U9awdwvyo#tylUAAn*~th=QNxUxKYoxg~>G=8R6tG`nd@{_ps6oEnKk^mxlE zQLEG7HqiJwZJ5L1ZnIk)b@})bs5?n=E^sK zIcL6Qz!#520oRzuTcgM<2Ey-`iu?UBRUbdS`KoIsnei<&z4#WIJb@RfFZ$^3u7~1zj#9^=W&a~=uKfD0yW&Jkx#;d$Wz0Javv6vrB)8Gji9HMu}z8pBKQXb4K zOK>3wx*JNwI3qUq`8`xhv*J^wsH>_*HuX8FMu%2w**1|9iwi;Dzr5ECQnI->#3(-- z;7J#|K~9dL1$dqwwX63byjCt;(l&qg0)HKkF0` zvF5)M!J8c*EIXq+{s(>dt6yBh_&7>NE%&p#mKfxnEk~Hb*ibUQ*7>4`nQz*97x(6H z`6e@EEC~1yjE8%H^sYqynmL^j!R-K_2yVk2ERh6#<^N=H1dw-Sxia9;LuLc;A_E>g>qa z*LCDk-yA#p&~TI(yCGkL-EmoPa0A@R_4a)U7sxSjjLRQ|&mWR~-^m5;6t06A62QRy zVY78NaA4qZ*wzJc#bC{6Neip4uuxgLT`#h>z%N%4Gfay2TK0#j<>A|fNTtK zunmRqKgHc2HvObr2joqsCpJl*fP~^inUO{jDAPK?{{%luFhyJuyP>8IRPdYURV#5* zFg;;2Toshy4&@i5^*-PNn2#nspRJ!dUf+32<=n@kiB%$JZ)pPUodMc=P%{7b^EkG% zSwr#vwD;a|QKd_}r~ws35y3DB0uma@NhAvj2n{VrmW)a;AX#!&L_}f}B`Jc4WDuc= zA}AR~az>y@lC$Jn3!NEtX77E!d(J)Q`<aDlxsj8=HeW>KWH^6hpT(^ON z`AX zJ8KD!c^>n(VC7#u-|v1UDXxVKG)HlG??JLKvDepsXE7#NnC63 zbwpxFZ|i@Qn1t{-ck~U@ev`ZA*_F%B2NLu(0_Pp54~@trP)BN6T6%U&`vA0MtajN> z1OA~;16!H4eHLoL#$^@hRyJY1$+XvwOZgVLA3btp&(ZVc)JIRzvg|K)d-$?F}~h7GMb9Or3;SaZi&!jTQzjg7kyZE zo^u?Is-xKHfnJxs#Pv~7!o#~BcfW|$1Ks{^vtEH~cI4PFiw8P9KfmD;7kAX5(sEL+ zy){HA?%&9?YX7hQ*-1?LS~K=436ospy#~ zk=cpysc8R_>uyozwdeDX9rR0Ie?0tXM>Vlg-*%miB#=1TLCIxd`oqnWG z_9mlbo*c7|^Tfd)FVd#0_K{&$4-_kWZcSpeJvhm(=b9~3Up#SftaHpxqsS(jO(S0^ zCQ?8<*Q0cb5VhBiTSJ!1yXtAXkvt(OW z+~PbX?t!j|xrd*NiN<$xFjd7@nN^Ao#)cfyi#^DUA?ym3U29th78oNY9gLmHd=~+h>+W+w z{!)e$#_8&(!tIXM_zfaYDZ}-V|G0;DafD>fa}ZwtsxQw#F?w#Qd&Tk{_r8Ks1~E>B z{KXR-_EijhSA+V`5TPh8NICsh%5cq`!|N>x0f@|1gb*cU3?~u{%dnN=BGPK_TOmSi zcE8hxKa|y%al46Nq!f2|&un0nxieqXMhP)?4zCv!6syj{ciZUA^PF)qB_+DXbOqTb zXPN6v4ogESL=*PECf(vbQjE3V!+)bq$1rMXx~yh(F-&66#5UPU&9au;S+`;5H*c6O zP1C9QmmX;z!>pTx=?_Tz@)4ToAxn{X4^I35_JS3WqdEaL!*e4#R)3&@y;c>G*4T>rpkKmPvXl(co$te1X}uX4|7%vlqe z${!BNvXuPcwOgNs1#_3{=0506hw41x_x>D1 zS42|Epk%6gyz8}w0X}-gloU6s@cz}(2P4u-HN=cE$^CmXdQyh#W}CCyLjI=%WANzu)px@w!I8wf`=?)hF^YZPT!v!=I2Jz0*;e!vJX%TDk3JP;W5n%tx|nk0`lF3;hc=& z@MMhc4FdoMa z4m%NG^{EDJj)6(@K8#tY^VE+sLn%(44$xP6)6r0}s#dH$b1#Lpo0eVcOvrVTF1=Uk z$z&}>S8(!IO3TOVNHK+vB4`x2xcTu12Gem9MQRN-iut z*lU{N<(jEPyG}r9c}K5pA0unekX$;-pykzlYtd}U&#H5Gxeii>%1(Ah6oruOl8}!H z(htI3A9=w}Op0MUMC(>Vj@c!Cp#KN~mh|#gQaHRP76vDa<}MY}%JsKfB~5SB71Gy6$E818{K4}DWF$(6}iGC2Nk zjeQktkNBq0cf^V9o@q3#G<8<`Yr%E5 z{kGj3kICa@Lv|rreB6eM!d^Ybq%-(sMuAT~XJFCtfSj(gAA5iy0apAsL84tTaIu&? 
z6>1EZqTsROGRcuE7y6EtGFV^JJ5C$^ytgc-d%qIPRk9R0Y?hD$?)pr9%_t>4({UFJDqHoPPB!s7Z$#pD(k#Uh-w{8)eG0Z z=1gEhyKmctuJ*OMGO2S!GatmN8pe7!Q1tv>SXX0s{iba19SgtLwdv6xPmt`t_-T5) zBurPXFR1C(>tN}~*Tlo|QBuA;N+s(~zv(0dMb;)nts0TF`0XM_DeJ~EE-OT_v8tuA z_pRY{A@H-x%l#P0STh_?TSmFm;+*iS^5Jb@`Nat=Gs^8s~t=C4-u11{N2Kn=1Si z!|LvDuGYr!Cz_t)}^IpGscfyT)q~c}B&}GI+UIp*3}f=jzQpdY@6r zk-Gh+;OBf)k>!Ckzn9YX-S!lZ!nK1m=z6YjK^Xs>HhlU-Q zs>EW7)SpZ@pZ00rv}Gm0rcjNOnBCI3dg0uc%M7Y?15V88^Hu}1%*(FSt|m$Nk&zf8 z+sEQ6VAfv7KQF0o99XohjH_xCNY8gFl=C0ue|wY_?1RDbz4Nf9df%@p8OBdI8CdLv zP57*Np*uO#dZB3n*oGo`ly7mV^BM!CD=z5a!vz6M^rKXZ~=>Y3%s zdDjW($Y{eQ3GVds5Rem(QfPyH*s1jR5U3j!^^f9q-rL6{#<{7g>vAog0lT#}T34o^ z5W?+WeA=vtOY_N@>It8oxcu3tX%bvSvOMQTm>tXFvrYoiaT>0aA7lm*j&$>M@Rc%< zCye@^3YRpa9y&^dQoLpC!tQ|{R#QBQ2A`_gcXt;N%1UfQlcRXm=H}(}H?KwK#BVk~ zO{?Af#JjQ@Db<+MG;29d!`WmzvUBpGJ{c)PBEo#&AqDZc9chvPB(o2^uwd9z8vrk| zJni$O7>~sR111Dm+9QMZe&Qb9R^v3zOF56X=2_>e);+Ua$}VCK$@n3k+Ce6pF8$9$79@eIe#rGSQrOKJ?>;jWD9_to)GG zlm*er?kjtfr3M$vd4AXIF~GIrAI+I{zIjGe*9rsfbwz6|pKiWWNL2HrW`xMBQAbIO zsbY_Ed>bfBb=`ZYF;>9&1UGk>uLXjYpkwl1@F2j-;CPDx%wB(0NdmU#< zA6BKZp5Dm8=xjw#s9u)NZw5QT@{;*@b7#fELW(*atP0P=J8xhQeuucLw5*1!R3$!C zT1cNf4e)ifEB5H-jFtV}PAb9mr)qP*NAHfykF89X;PQ@&BBpZ_CxM%{wENQS5WmYm zeZ&92+TzCkwsoD52XhYd^?_5KKAueCTtp46_*w#T@*?wAGR!**-!cu*p?Y6=mqm8m z$g`1dF=p+hrnJdZJnO4+J#InJ`i3=$W?KfKp8z49NPHKQrTBE>Aok9*FFz*hb|+xu zd59Myi4ij!*QuxvfMMoEoz&c*f?UsyteW~u@h&}`ialOA`Gr&BToZqk+-%w`>(8}gv(_U~(crJ+--_gZrE?1pHP zV$rLG(MEX$?+_mli)0A@qImt_*Gw(ob6-`c=87AFe~WI|^P`oauMlB0D_Kq|TxyOE zl1;z|Q@7PTQoY^dI11qaJLzo|itEpx|-I9qv@eXv6U{Gq}41q=edlIpLm| z7{UqO{WgjQP+ho(SX1&wAz2l&uVV>BY~#cMWQbDU`4=>tM4H zh%gD)5Ar6fKx+J@5LZB{*TbCQ}m~nNR9!KStA-chz~!{9S+My(;Un|Chql zFJ*Xj@HPSfx+L2V!D}M3ca$uhIMnTp;3z-PQ`{043%v;ofA*ZvL45TYJ55;x&s*Hr@OSOYJ$hGk3K0T)y1&ZhonI=>*?P z%D)@rva*p@$U&R$EG}a(Zxol8;Uc=v{X3Qvvuo&aOBHxV&vAa)*wW=1=~(B<-t}mS zwBY)aos}(RAF6G|*h1B}oE%dw7n+3X!=Tpd2yX}soOc9GGJJGr2fH8?yYENrfKrHN z9;9|u+-%QH6xkRM!@skVx42gYuNVyOJCvu~>u~k;nVTw{uN)sO2qCr>_uDQ)d>#m; zf@X&)%}#<{fJz9eDcY+L{oQLr=YiG_&J%97F;7 z3mL5c%-JCS8^^w>0>=q+o(?X61dM)&6Ze4J7QtYp2153cMmC+`Pn)iWQ39XRp)Sj^_Eh(d;CS;Oe)Zf){#GGC5Zu$gJ9CTtKGkh8t~QRi z{#V4^q6)E#fonQ!U7Jp4CJ!fRCWwQSeM86*=Hg9Q&70l=AT-iS#FcZa#zS6?Zm53r z8az>W*4$G7I`c*!1l7A4Akc{o6PHp+hkG1?b1@-TCfd)*6R! 
zRY$ZRw#e4(@XMOdIi8I#tiW}(yuKo6rRpD`PkxgI6K@|N|Bty0?wfY#&zKnt1k_2I+!8?4wUsR$Iol{4a{rvaK zu3u`aM_8(A1s$!aL2j8KlSW?KG)$+B&|m>#c5R8&W1s!FP+6p7f&5NnZH z!m5U~$w4tWHPZ!0)|BnBxHG3O&-`wVb`jBfw0a;cNWyPE?@8G<74XfGFBARoDVhNR zyB$JgHk@Tk73sJfZ^Q(f9J^_yB|G<|W_;>b?5x1EI6Q}22y43-q#&(sJV)Ej`$(BL z%|lg6rp{K0&E3a@Sx^C7a1&?THT_t&Dr+lPw(ItMJbL_*>ye#jgmhf4a=IvxjCyjTw6`X&g1AXk~^$abB%p{p;v9iIASrbwazv7{qY6@0wl;P z?jrp}FSwJ|az7ZAPg?|EWc=!--TmZ;TJPrw{}BTo zEaJ43>w!`Rhk0vthVXdu>fSyuB@PSJrxK&g^Rz8C#|GB-P>$Ro8A-1Vet&S}Aa=Lr zBz5?}(cRO5LOHjpD=E`nE~MkNZRhQ77As3ov?qEA^d(s8#vWFRlC64qSRLo8Q?Y_Z zVH;D>yDiQQ%*XB#tE{RxiCX;J99(kARl&4yoO_u)xXdNjfBkN?_060Sjz)*ro^F1l zy*)go8@izt(g$rC$|(O!#;>ZqLf5)vYj9ynx}dcGyZpP&)_t=8vv2|BFH2r}-C`T# zPKwF%R;@`D)pc!ppIt^C&JQ{<+*l5+!*52e;aoqruOtg}ZJv3l-nCJ~AG4+~ltlHx9c>pQYQZ+_g?%zm69b{cMjP;E?Ta%@yj-GtN$Kn*_#gB<|P{AQo zd)f#d(DntOsboV|W;l-DJh8de$q%_0;W&`;7=NzC%9bLMko}SW)Qi|E-&ITIkq?d$}cp__4-# z#f|+Wbx+O&8}J;Kj(zQj3Xf+T*nunx7X)GY!BNOiCsIhydci|{r#jf_Gi%d3F(Ty*l}b5J0OKnHML?1^WRBRVO=$+G3gh%s#I9=2O(9cal1f?7W7P}KJkH} z@~%I)0D5e}x6?*ZmnhrP;Z)Km^JDQNLpKYS#g~xH$7M-|+z>t7&_xUtGRnb#0WR{8 zdF~z|DiE7k1RBMz*h)?MwHRS7JSaKlSQVi`oKr+eACB*?#9bvI%{VR2mHDtC0Z{HZ zm>wZ%*}uR#l&`$!jXjC?!?3JpMbA5PpFrr;{Mf@6`~V3AIF|aA-N1C01oP$nxUR(l~t!LK_K1|qm_3-Mgo@f!+-cSKOfLJDCIFS+b zA7OvQE!HpGLSmdaVLyGc=$tpKd&n?RB|+InclvFn@02*(49V%`T#DI1)`R;91PJCg zOknV4GyK-t@4=7jn5Y|$Lcr@zbsPU%eYeueq%HpoPrhk44t=~v(IQR_;vtahe1YUT zElpv$^l$I`>vk+{)<&;Y;$jtjs-&=G86Z zPB670fIdqL1l%(}1aB0eMe2cIOLW1OB&k?e9LB_Dx7s9#dl0!YN_So9KmtFOfaU%J zXpT+prm`A;);sMgkGi5h+n*$^1x+z>NkOL9z5zLoI#EL~HjIRM5V?U1?N7%`@^Pd= zlg)Wz6?^lSc-Kw$k){_*?RsYqo|ydc1n|pPc#vWWq|SjP#mOKYlJa4E=HzzPgtSuo zDO-5_lSvk(Z23-4Z{Kl7+>K|Xm_{W}OOW3YHF;pi3bMSVgfyeVU#k0EoO$KgOP>+!{MS@@} z2y`8hK=;_~2f0=^+K~vCPKQw>Lg(FB-^~;^oWGn-2 z#)|o~Wm52jy|`yzSgRQuqi^JRA+rsXD+o`Mhi}Qmq->g9(9{bIa&Qb|>C%}p^#XXb z^esYybn8tIcqKy3;S4rFk0Nf9+QqAH(}GTGkK0>GO%|XPb9-ajGQruAR!V`ttWP5U zt1k8sMn(e5kibB2S12IA(0u`@@Zjd#4BiGj8z}yiu<{3YFL0q1-fx5T2qb@;4f*qb z$RA&UAnpqFTbp}Tf^g-*y`)K+?2{-g@rYcn`aC4#61J0ZUt*C{H2u3%AP4MdhweT6 zedJ>}(ZBKJ;OeP%6hPBn6yo)%c)u%_{FR8-*KpInX1@c1uHE|3wHH8+#9!B;V*W?9 z=WuEHvCr%np$K&J5@lt0Y2}0f>ZumM8zV3?L!F9f$%5o{-@@8!*;FQ%P}R?Z%Od;-h{O{XwTs?dK6{5H z2<7`(Rp5<4NYD_gn%qrv5`?>Yb8|sM6%eR2TJG5O3M`#ag|hP}#8lCPiS$-U4Yk2m zdCoIVC)-cWO7%D-ipK0l5`fxdcGDl6n3?BXD%d*{;|;x(rinGKLSWB5Ky1hj5l`!t zs57khs!Dn{B=`2t1ptZRh3mVH9=O8O8Q#kdD=yp`A4 zDKRl1^WhAeu|u~b&!%x+H!ILYNVRLRhcVY?qq5NiZ&*Q1SIl9R>OB$rBC@C21C6~W zC)v2wcQ|2YzF$!>N)Qde?A9AesI3a+9aXou$}qIBbs&9FiDiTMWEH&F+&sP6B^-377(r2@yO zgKVcD$Esn3^@O0zx4KE}{t98UGy$4r+}3b00rvG1DmOP5#EL?o`lo<^iSh)gFBU>T zhw!ShIKnY8Ldq!7<^1)Hv4ZM@YfxY+m~6zPO$t2Z zHyPDDwjHi&>svp)+}EwJ)RUaBR=~*Wy4HS33~~et?(6#=$o3jf%YB|9Pk24Xanp=0 z^O`rlNcfWM(hYX0dlfH0Z`O%aiqev0`cE*Q!#REcR`L2AJvzGk(xZaqSzARlq&}s= z!sRFv18M4lD3zcNN>m?%!6HA=O?yl4iO#@jF7MUMaM3dgvS)o>VP(AR&g>Z0X39jrR_#&{pzH}_ls6qXdvX^x}@X!5R+IvQYSq(`P*)w6RC$F; z7oE-J3X&{k={}uHld(a~MuEH^hSBFrCEvTXuB)){nlc9p)sZ)v+MU^w$D7T1T~wjP zkD_3Kzc`m6$xoR5S^qmmg=px3(BO&$_}YNaM1e(7pJH z*RpS>IP130i@BC}GzsHFWq8Qz%Wc&Z$j)u?n1VG-fIKXZOJhyrQswT1mY9Odbk?>! z*R(u`7MAr8DxC)Ig7a^40Zj1gyd)+WU;sL}=@N=Ypu*|}2t!x$KH zPMspaj*b8wtyjC|Xf{WdzLcR3R5PvW8*5avs6YRDmMOo_#N9G@Rhe}5{BmIg9OML=Z>KUGb z(7}ig0^HK~n%MQ&iUFqhNHbZ-*vr~tN$ZY6cwazZKx;K9i*4&8GY2!jm&8>YE>_O6a=^aWbKs z`PCZ(?GATu-?@9j!&^_a_0&_~f5!nWdCCJYAO`{m2Y{yck)I#Qg9-?1uA3?q+jr=3 z30g{K;`e>=x&uxnCNSk7N^ureENEDW@IG)&h$>nZQV7VE&$lAksVyirj5?k`>{FH@ zS5l5uQga`F_O!Rg8D0fhWjX++ynH=J0gncJk4N=|!WVMGOcZi-CiD$? 
z_XF@07c9;H zuwnuPlrf|$N5jv}&>e`oN~t7=liIj8Oj}b!3s-379-tkVpVJefE;_MoCoO?vIqFD@ z2xVYmyBFdtv{`UBGkqnT)a-Kd0=Qx_G6xTAbrl)Bp@7^s4oOkohqXbyK@Nxf?KY9+ zYasF`&@~7_bVmT92qT2>OWp+yLBBy_`=8)`mMpBeSrp<)0ck{ePeKdM0brYb7asiy z9W_9VANl1GT|iuPInqq?07SCQk@*@SBO%FlbdrKRASDw5TtG*RuR${gc^Oo?zDfw5 zh513CYiVn3Bl5$&6;3UO}DLT_M|S zy*eoPE=+vUeMBbC-1kV%G)gH)v(t* z9^P$tkfsHD;TZjaI-loAIprj>oT4=U>Nj(GE-G`yD+-=E3~Fpxe0qrtdgFsq=SvO9 z156bbm`quO4D}dK4-ug{gtev)z%&dX-btFpOg$pNT3@hnDf!Y9?=)|zbZb(ZWpSCk z>IS)w(J=a^EWYya`=Hp(g!aevV^qzBR~H}q551j1%*30o>hV z^XE>vF^+6oEvVo`v)h8R{s@6^9KV7sQ09eB370U1w_CJ1K?SJa`m3oCPXbk((kDdO zY~X~zwTB_QOa$3w|Fne@K=}&poeFPX9Qh0pxDKL=+kDHv>*8e=$YBWo=Had5ng;x} zj_qxx@Xy=q*J&Xe_e%)Bgz(kt{Qsm0hPzZ_C|Fm_6jC#i2g(!z>caY7n&fsA09k&i zK2#W38(;ft5)0@`8KMKC`#4?rnfnT(f;EN1WH+4Ucsk4DBJ7o(kZl7c^_m;#i~IP@ zu82M#6gaI7p5__nSI2!C!A8G9UrYs6JIn_*@Ip)q)xR<+1f(w|-SppGcZa|N<|AI}^?BM_>pu*&jSuQjFbRB%r?x*!$Vss*qr5`EjjLANYg3VhZe0 zS>RX10;l^|hgu~+KlTR~;J@&*|FJ_=PGIt7jDI7nGvQJ6-tYKg0HwxMT>&U62rS-@{09GQSwXsI!q=A`pwUGa zgJe)mly|OhdLJCOD`O}+0t^^&5-8Fy9E|=)lmecT>K-Q3&sY(;y0IYpWtc6}Ml|gA)F)Lw`x&e~tuZdVN=vlo*t9bd!RO%oCa~ z1~z}V|N2I3Q0Pe>jlfd0paClwvHE@6QQ{AQ3&MJzIe^exp6v(TRzs!mge3vx~pb5grRr9bGs6Mb~eby!H41X~8|GE$0aUj-jgITLhZC&8zSlX|K^a+=i&qRC+h`tvEDdxh^;@vrj3+G;i$(FT7 zy!Q{$V~W4Q4yW4HU=9sd5-+KQV2=x!d<{?Ff*wQ-u+3>nUjfNPMOTk$MEI*w&?ghJ znBag2jf&!rHimy?KYkth-$VkWy2k~qI!gLVG`_$>Y=Luh*X(Air{o=CX@{z19?RMhe(2H@7K#%o8 zS#~5Z{11%>6Ov&*>BU9s+KyA5+73GA53Hwt|FE&Kpg~gwJtb-a)6$T+ex53(?luW?QFA-+9<;6qqzB>-2wUE3mM~w;x+{uSk^fBKHpn73L3I z9(Fmk@!)Z&F-<<2q}%Keq-y_V&VDMAbt{Tbebai49_ieABa~{DCP{amD|3Uu3c9H( zWr0nl<`)tPjD9(=4RSNWd2HWhBQl9m_0qBLhVq7XDg8F7{X87<#gW1NTJf?U) z3i~(X=fvxUv!k``ua4}m;QTQ)Ps1PnIL|Do*1=voVPyCq_Hm^kLdTqeRhTV*Uy8If zkO1gfrG{$7pUIMI$1%rTma*12>zB1o7O&5ke(75F6)^KGoEx6lZ|1J@VBG0(Vw2}R z!}XK95yJc{^UB~hW2CcxcrV$=ps+j_&Zu2LF#Ebzl4i~$DwoTwV|D|FgJPGvDqKTT zdkq{PXhZ)EX{C_6$L)q=%-Zxx%K66YCsE#8PB076I;dD6TNI=L&7SG9$}!xO%}xG(p8iIHwukn9}1X-mgHnRV>D=ddvHa@Cyd|!~`qy ztgBKNT-uUVK2O(Z^IVv>`@JEXJnK`!{pL=b%857iwO_m_(9D;-Ul*lH&Luk*ej*_; z)6TtR&NPTA-UA{Sb}km@D7+P48?pMdIaQ~rXtqk(xcZMyf>V+(t;DilH5Z-wgwLRH zl#+py6W20JT&%+YO{JGO#(EXMHciAjucQ_V&hK8<6ysb`MOrH0HHwZD4s}vf(>{-Q z*IYk2DLnK<=(Z&LQ%dFzFze)AoD*IQkjz=w>XNQ%i$#B)$0_}uxojRd930n?lF9brAE z3vx|RNC_>JO^dl+?|sKo#mWWs5g#4B*xBL(g+tHwmrNsrW?OM~Y)`_wX}Kj4ai!ElHHv(ulwdkuB3*9NRS=W@kuhayEiFGAW997O{eU@R zAKb}N_gCZ(jwY(NO1f(6Mj(9K1k-#k!I$)8?jR3lCnPJ zjjNByl5%a)7jFwx!W5M9NOcY>WSSh$Kj|MK>Q<~f%oH)cM?fxiD3=usKMY)|m5=c@Tpp|;Su}Qgj!+pI5GoM4NW*H-;TG?gQ97flrv5@Jx>8#<+ zK+Hm|WR<8?l^oci#T+r=d3}>9178%+78Mly*;h7xDJ`OkEix>)ku31JDH^TY@@cpc z64kY5-zq+X?7{*2e^yrSA>NtZ3?~bZYQ@|ma=I;7dM}vp!z;YkyWuJQpKS6$Bn6y+ zf9igIyDmLvu;>~+T`q$zD!R8;@_4ryG)r*L^J)|g_{6U-*u=eKmW}53aFNY3TW>Z| zT!Ghz>&3iU@TDzT>pw_dVp*C-AO4%epE;%PdaEH+c9|FwXsZTY|lU*a+wANH-G9&-shp1_tCGpAIQx zKv&jZ)V80E&b1XVCJVmX1xiJjK=LQO@COwFJj%A+XOQ`=&j42J8>#JQ>jVpq!1=QX zc$Ccfu9M(@amV^EA^uNj`~UwYL<8E(lYYfP>sQZv((~I+v%a6Kum5y_3Tf7)%}D*W ztr3$K+}u@ggq9JUAWVk3hD?U~Sy|l=Kv+gWYe5J<{iFyc?J!#$^a39~@0fQ0T&!B? 
zNm=mHr~lZd$Q8ins=<%c+M9HP^@BoB&+SI~xsq|TP;5Q&wVDQpY#&E>1=^*rTvc#X zgyDX1l@Pf~%BKx`fJsvPb07n9K0;c3DJVgEicU%qfnA^z1M?w>dqNDBMf&_7xt6ad zbvp`h=jhVwRDXo@Aa3R~Xj~%Z$McBC@1p6!GSzPlId#Q~HlWc>^!C_OjaN-9GWq&c);Wjk z({E1>6n@mdt&VfU#KziyK4hli;`U%;gt@Vf91 zX9ghL5AK1RYX480VS76gkb44DE;m2@>!QEJ3*PJhJVo-9xGMM6^L)2s^kGoPwYJz~ zgmCu5f4W@&U-Ir*#$RuYgIE>W%A^@UHqIPxXhCAgZ(n`=B>oK9VwcNKOUTyd7oNhWGm#=)2pDw>2 zF2CRFUFh}=4=`oe^f+t}EY5K)WToW46S`(L#l&uFN>)Otm^El0%NbBmA;vmrTA&F7 zjI_qmqOKp?D8vZa-XQ;+)@Ihu0HZ#{N!o2~I~RjP=bj%H{mIjcO}5M_Wb9J*%g*MM0`M3S>BuIn{HdP8^^8Ab#AQMr8E0~4Ct zo%6_QSOek5y1}bwTR7#CPmx(dU#^n`K^}_W%^;GD4%2hZE!+$%h##uIv|HGWNsrtqj! zD;dIHAC+)@wv}kA!UD$cu=!H9VkHiBYo(1l1JS#QVy)eWXCvC*peifGviZ~+&Ww4Y zKwWM*4d~ntFfps4Y-@LO)*ALrmLx8eq$E={J18uFL8dYxQ`TU(n0k_0&ZCK(u6oP) zw{nJ`*bh}*E$NbWh-oR-9$CtC4Fswux=*e?_#h>}T)O+R%(RbCje<{@4>`@}mLx`c zvpl-V7?wf#1X(;zy)Q_;*G{^4rT=(Gj+CYjoqlcj!TAkGPPY;Z?^f*&i z;)Vl+sRNHDw^jq^b}}Yg&g*Pr!gVd&z^WtuP*@c69e;th8x13_i_zWlzr&1`3cY+Q z+o{|JDIJ*kC^lH+H2403#V%i~t(I5EDtE3;^*;%vDF5`)*yrn>F~|Yzl3YfPN&)@C;PT*N3(M5R~fyH?GA%sESLF2-)d|>UK^)1(CHAjo$+vVOn z(_wjIT{3?!Qpx3&4^VS_6FibwJfvB;I?|Q9AI0}d9UEhugO#K^<-fUTHn2Wldt6$s z?nH}rY*?S2w2_F|=Ugk8GQOoTQ$#5&1cPJNzsxJOnZ46#tAW9YqtsT58JU(dn~rE7 zqe})*44ZpPt{mT@%RcncabMNDY1hyK>MzL!Ta@WVm7=}gTJ@XA5IrP&y1b?YL(`I? zY)WZ*%&je!{igWz>?ZXrO~bGTU+#t(ybcRVZmZ5TUb2lnKC-zVMibrKTD?pnc0&(0 zu-^3C&IGsjo6U`f*k$J7SCdHU_I8s=_w#1hL{{#(-fvv)O(B;nE0nwIDStE8vw8j) zms7I#;9`e@1q)-cTDn!Ant3m^z0o??iq5#Gp(9OI)tT@Gf$Zq@u~+joeGW5OiCIPL zdYPK?qYiay+;3NU8i$hO(a#+k$KI8imfQ6Q9v)gA?1sdx)ZB97G|M4WeI#9dBz3+2 zypbn4wmR<7d~8UOBG1OknqlIHgVkFrEogefhOI3l-+AR>#W!h?zbc&Mz9lKfpCKSt zFP-2gWG!ZqRx;$DEykoNvbcVRf73~5!=kE2F5k?_WG#?t^5J0L$P{F!IorIqEFuVBj0?4K9Ekhj~a-6 z72np@;Hj>?SR#eu&&B5loT6S#RM`8Jd!vA=Z{3IaGFKw@a9;0qw10@n%6$1tN<1`N z@R6Xj_7aV(M@-!8x*F;{JCPUICR6anOXVSnm(A!<@nP;y)l)+y1>f@HHHA0*^ZLu} zy9{fJ1k7Bb95!ik9p(pp(o3*Y>b=_3J}he_SibSr8gAo-5SSi})jb}iI7W%uH}m#L#Qfp?L7acRT_MvSTJwW=cX{!%p5PaAK*EC`qsWA^gEXGh%? 
From: wrigleyDan
Date: Tue, 17 Dec 2024 08:40:49 +0100
Subject: [PATCH 02/18] rephrase one paragraph based on feedback, minor formatting changes

Signed-off-by: wrigleyDan
---
 _posts/2024-12-xx-hybrid-search-optimization.md | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md
index 5cdfa7988..5cad9fa3e 100644
--- a/_posts/2024-12-xx-hybrid-search-optimization.md
+++ b/_posts/2024-12-xx-hybrid-search-optimization.md
@@ -116,7 +116,7 @@ These are the results running the test set of both query sets independently:
 | NDCG@10 | 0.24 | 0.23 |
 | Precision@10 | 0.27 | 0.24 |
 
-We applied an 80/20 split on the query sets to have a training and test dataset for the upcoming optimization steps. For the baseline we used the test set to calculate the search metrics. Every optimization step uses the 80% training part of the query and the 20% test part for calculating and comparing the search metrics.
+We applied an 80/20 split on the query sets to obtain a training and a test dataset. Every optimization step uses the queries of the training set, whereas search metrics are calculated and compared on the test set. For the baseline, we calculated the metrics for the test set only, since there is no actual training going on.
 
 These numbers are now the starting point for our optimization journey. We want to maximize these metrics and see how far we get when looking for the best global hybrid search configuration in the next step.
 
@@ -124,11 +124,10 @@ These numbers are now the starting point for our optimization journey. We want t
 
 With that starting point we can set off to explore the parameter space that hybrid search offers us.
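Conceptually, this parameter space is a small grid: two normalization techniques, three combination techniques, and eleven keyword/neural weight splits in steps of 0.1. A minimal Python sketch of enumerating such a grid (an illustration under those assumptions, with made-up variable names, not the notebook's actual code) could look like this:

```
from itertools import product

# Illustrative sketch (assumed names, not the notebook's code): enumerate the
# hybrid search parameter grid of 2 normalization techniques x 3 combination
# techniques x 11 weight splits = 66 configurations.
normalization_techniques = ["l2", "min_max"]
combination_techniques = ["arithmetic_mean", "harmonic_mean", "geometric_mean"]
keyword_weights = [round(0.1 * i, 1) for i in range(11)]  # 0.0, 0.1, ..., 1.0

configurations = [
    {
        "normalization": norm,
        "combination": combi,
        "keyword_weight": keywordness,
        "neural_weight": round(1.0 - keywordness, 1),  # weights add up to 1.0
    }
    for norm, combi, keywordness in product(
        normalization_techniques, combination_techniques, keyword_weights
    )
]

print(len(configurations))  # 66
```

Each combination corresponds to one candidate hybrid search configuration that can then be evaluated with the training queries.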
Our global hybrid search optimization notebook tries out 66 parameter combinations for hybrid search with the following set: -* Normalization technique: [l2, min_max] -* Combination technique: [arithmetic_mean, harmonic_mean, geometric_mean] -* Keyword search weight: [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0] -* Neural search weight: [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0] - +* Normalization technique: [`l2`, `min_max`] +* Combination technique: [`arithmetic_mean`, `harmonic_mean`, `geometric_mean`] +* Keyword search weight: [`0.0`, `0.1`, `0.2`, `0.3`, `0.4`, `0.5`, `0.6`, `0.7`, `0.8`, `0.9`, `1.0`] +* Neural search weight: [`1.0`, `0.9`, `0.8`, `0.7`, `0.6`, `0.5`, `0.4`, `0.3`, `0.2`, `0.1`, `0.0`] Neural and keyword search weights always add up to 1.0, so a keyword search weight of 0.1 automatically comes with a neural search weight of 0.9, a keyword search weight of 0.2 comes with a neural search weight of 0.8, etc. @@ -172,9 +171,9 @@ Here is a template of the temporary search pipelines we use for our hybrid searc } ``` -norm is the variable for the normalization technique, combi the variable for the combination technique, keywordness is the keyword search weight and neuralness is the neural search weight. +`norm` is the variable for the normalization technique, `combi` the variable for the combination technique, `keywordness` is the keyword search weight and `neuralness` is the neural search weight. -The neural part of the hybrid query is searching in a field with embeddings that were created based on the title of a product with the model all-MiniLM-L6-v2: +The neural part of the hybrid query is searching in a field with embeddings that were created based on the title of a product with the model `all-MiniLM-L6-v2`: ``` { From 349bcca75dfca415021ec60f76d39413318c47c7 Mon Sep 17 00:00:00 2001 From: wrigleyDan Date: Tue, 17 Dec 2024 14:22:35 +0100 Subject: [PATCH 03/18] =?UTF-8?q?corrected=20percentages,=20incorporated?= =?UTF-8?q?=20reviewdog=20=F0=9F=90=B6=20feedback?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: wrigleyDan --- .../2024-12-xx-hybrid-search-optimization.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index 5cad9fa3e..f3128496f 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -15,7 +15,7 @@ meta_description: Tackle the optimization of hybrid search in a systematic way a [Hybrid search combines keyword and neural search to improve search relevance](https://opensearch.org/docs/latest/search-plugins/hybrid-search) and this combination shows promising results across industries and [in benchmarks](https://opensearch.org/blog/semantic-science-benchmarks/). -As of [OpenSearch 2.18 hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is linearly combining keyword search (e.g. match queries) with neural search (transforming queries to vector embeddings by using machine learning models). This combination is configured in a search pipeline. It defines the post processing of the result sets for keyword and neural search by normalizing the scores of each and then combining them with one of currently three available techniques (arithmetic, harmonic or geometric mean). 
+As of [OpenSearch 2.18 hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is linearly combining keyword search (for example match queries) with neural search (transforming queries to vector embeddings by using machine learning models). This combination is configured in a search pipeline. It defines the post processing of the result sets for keyword and neural search by normalizing the scores of each and then combining them with one of currently three available techniques (arithmetic, harmonic or geometric mean). This search pipeline configuration lets OpenSearch users define how to normalize the scores and how to weigh the result sets. @@ -31,7 +31,7 @@ However, there is a systematic way to arrive at this ideal set of parameters and To identify the best hybrid search configuration we treat this as a parameter optimization challenge. We know the values parameters can have, so we know what combinations exist: -* There are two [normalization techniques: l2 and min_max](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/) +* There are two [normalization techniques: `l2` and `min_max`](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/) * There are three combination techniques: arithmetic mean, harmonic mean, geometric mean * The keyword and neural search weights are values in the range from 0 to 1. @@ -48,7 +48,7 @@ A query set is a collection of queries. Ideally, query sets contain a representa * Very frequent queries (head queries), but also queries that are used rarely (tail queries) * Queries that are important to the business -* Queries that express different user intent classes (e.g. searching for a product category, searching for product category + color, searching for a brand) +* Queries that express different user intent classes (such as searching for a product category, searching for product category + color, searching for a brand) * Other classes depending on the individual search application These different queries are best sourced from a query log that captures all queries your users send to your system. One way of sampling these efficiently is [Probability-Proportional-to-Size Sampling](https://opensourceconnections.com/blog/2022/10/13/how-to-succeed-with-explicit-relevance-evaluation-using-probability-proportional-to-size-sampling/) (PPTSS). This method can generate a frequency weighted sample. @@ -59,7 +59,7 @@ We will run each query in the query set against a baseline first to see how our Once a query set is available judgments come next. A judgment describes how relevant a particular document is for a given query. A judgment consists of three parts: the query, the document, and a (typically) numerical rating. -Ratings can be binary (0 or 1, i.e. irrelevant or relevant) or graded (e.g. 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, there are human raters going through query-document pairs and assigning these ratings according to some rules. On the other hand there are implicit judgments. Implicit judgments are derived from user behavior: user queries, viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple clickthrough rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias. 
+Ratings can be binary (0 or 1, that is irrelevant or relevant) or graded (for example 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, there are human raters going through query-document pairs and assigning these ratings according to some rules. On the other hand there are implicit judgments. Implicit judgments are derived from user behavior: user queries, viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple click through rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias. Recently, a third category of generating judgments emerged: LLM-as-a-judge. Here you use large language models like GPT-4o to judge query-doc pairs. @@ -69,7 +69,7 @@ Implicit judgments have the advantage of scale: when already collecting user eve ## Search metrics -With a query set and the corresponding judgments we can calculate search metrics. Widely used [search metrics are Precision, DCG or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). +With a query set and the corresponding judgments we can calculate search metrics. Widely used [search metrics are Precision, DCG, or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). Search metrics provide a way of measuring the search result quality of a search system numerically. We calculate search metrics for each configuration and this enables us to compare them objectively against each other. As a result we know which configuration scored best. @@ -129,7 +129,7 @@ With that starting point we can set off to explore the parameter space that hybr * Keyword search weight: [`0.0`, `0.1`, `0.2`, `0.3`, `0.4`, `0.5`, `0.6`, `0.7`, `0.8`, `0.9`, `1.0`] * Neural search weight: [`1.0`, `0.9`, `0.8`, `0.7`, `0.6`, `0.5`, `0.4`, `0.3`, `0.2`, `0.1`, `0.0`] -Neural and keyword search weights always add up to 1.0, so a keyword search weight of 0.1 automatically comes with a neural search weight of 0.9, a keyword search weight of 0.2 comes with a neural search weight of 0.8, etc. +Neural and keyword search weights always add up to 1.0, so a keyword search weight of 0.1 automatically comes with a neural search weight of 0.9, a keyword search weight of 0.2 comes with a neural search weight of 0.8, ... This leaves us with 66 combinations to test: 2 normalization techniques * 3 combination techniques * 11 keyword/neural search weight combinations. @@ -231,9 +231,9 @@ We call this approach to identify a suitable configuration individually per hybr You may ask: why predict only the “neuralness” and none of the other parameter values? The results of the global hybrid search optimizer (large query set) showed us that the majority of search configurations share two parameter values: the l2 normalization technique and the arithmetic mean as the combination technique. -Looking at the top 5 configurations per search metric (DCG@10, NDCG@10 and Precision@10) only five out of the 15 pipelines have min_max as an alternative normalization technique and none of these configurations has another combination technique. 
+Looking at the top 5 configurations per search metric (DCG@10, NDCG@10 and Precision@10) only five out of the 15 pipelines have `min_max` as an alternative normalization technique and none of these configurations has another combination technique. -With that knowledge we assume the l2 normalization and the arithmetic mean combination technique to be best suited throughout the whole dataset. +With that knowledge we assume the `l2` normalization and the arithmetic mean combination technique to be best suited throughout the whole dataset. That leaves us with the parameter values for the neural search weight and the keyword search weight. By predicting one we can calculate the other by subtracting the prediction from 1: by predicting the “neuralness” we can calculate the “keywordness” by 1 - “neuralness”. @@ -277,9 +277,9 @@ With the appropriate data at hand we explored different algorithms and experimen We went for two relatively simple algorithms: linear regression and random forest regression. We applied cross validation, regularization, and tried out all different feature combinations. This resulted in interesting findings that are summarized in the following section. -**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (i.e. when comparing the RMSE scores within one cross validation run for one feature combination). +**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (that is when comparing the RMSE scores within one cross validation run for one feature combination). -**Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 vs. 0.22 for the best linear regression model (large dataset) - both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. +**Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 compared to 0.22 for the best linear regression model (large dataset) - both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. **Feature combinations of all groups have the lowest RMSE**: the lowest error scores can be achieved when combining features from all three feature groups (query, keyword search result, neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with keyword search result feature combinations only serves as the best alternative. 
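These feature-combination comparisons follow a simple pattern that can be sketched in a few lines of scikit-learn. This is only a sketch: the column names are placeholders, and the training table (one row per query and candidate "neuralness" with the observed NDCG) is assumed to exist already.

```
import itertools
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# assumed training table: one row per (query, neuralness) with the observed NDCG
data = pd.read_csv("training_data.csv")
base_features = ["query_length", "number_of_terms", "number_of_results",
                 "max_title_score", "sum_title_scores",
                 "max_semantic_score", "avg_semantic_score"]

rmse_per_combination = {}
for size in range(1, len(base_features) + 1):
    for combination in itertools.combinations(base_features, size):
        # "neuralness" is always passed to the model as an additional input feature
        columns = list(combination) + ["neuralness"]
        scores = cross_val_score(RandomForestRegressor(random_state=42),
                                 data[columns], data["ndcg"],
                                 scoring="neg_root_mean_squared_error", cv=5)
        rmse_per_combination[combination] = -scores.mean()  # mean RMSE across folds

best_combination = min(rmse_per_combination, key=rmse_per_combination.get)
```

The feature combination with the lowest mean RMSE across the cross-validation folds is the one reported in the findings above.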
@@ -314,7 +314,7 @@ Metrics for the large dataset: | NDCG@10 | 0.23 | 0.25 | 0.27 | 0.27 | | Precision@10 | 0.24 | 0.27 | 0.29 | 0.29 | -Looking at these numbers shows us a steady positive trend starting from the baseline going all the way to the dynamic predictions of keywordness and neuralness per query. The large dataset shows a DCG increase of 8.9% rising from 9.3 to 10.13, the small dataset shows an increase of 9.3%. The other metrics increase as well: NDCG shows an improvement of 7.4%for the large dataset, 10.3% for the small dataset, Precision shows an improvement of 8% for the large dataset and 7.7% for the small dataset. +Looking at these numbers shows us a steady positive trend starting from the baseline going all the way to the dynamic predictions of keywordness and neuralness per query. The large dataset shows a DCG increase of 8.9% rising from 9.3 to 10.13, the small dataset shows an increase of 9.3%. The other metrics increase as well: NDCG shows an improvement of 8% for the large dataset, 7.7% for the small dataset, Precision shows an improvement of 7.4% for the large dataset and 10.3% for the small dataset. Interestingly, both models score exactly equally. The reason for this is that while they both predict different NDCG values, they predict the best ones with the same “neuralness” as an input feature. So while the models may differ in RMSE scores during the evaluation phase they provide equal results when applied to the test set. From 5891b99769e341d631099cd2a1f4c772c9a0e1ee Mon Sep 17 00:00:00 2001 From: wrigleyDan Date: Thu, 19 Dec 2024 13:39:20 +0100 Subject: [PATCH 04/18] integrating Stavros' feedback in the PR Signed-off-by: wrigleyDan --- .../2024-12-xx-hybrid-search-optimization.md | 106 +++++++++--------- 1 file changed, 53 insertions(+), 53 deletions(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index f3128496f..ba209e2ce 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -7,33 +7,33 @@ date: 2024-12-xx categories: - technical-posts - community -meta_keywords: hybrid query, hybrid search, neural query, keyword search, search relevancy, search result quality optimization +meta_keywords: hybrid query, hybrid search, neural query, lexical search, search relevancy, search result quality optimization meta_description: Tackle the optimization of hybrid search in a systematic way and train models that dynamically predict the best way to run hybrid search in your search application. --- # Introduction -[Hybrid search combines keyword and neural search to improve search relevance](https://opensearch.org/docs/latest/search-plugins/hybrid-search) and this combination shows promising results across industries and [in benchmarks](https://opensearch.org/blog/semantic-science-benchmarks/). +[Hybrid search combines lexical and neural search to improve search relevance](https://opensearch.org/docs/latest/search-plugins/hybrid-search); this combination shows promising results across industries and [in benchmarks](https://opensearch.org/blog/semantic-science-benchmarks/). -As of [OpenSearch 2.18 hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is linearly combining keyword search (for example match queries) with neural search (transforming queries to vector embeddings by using machine learning models). This combination is configured in a search pipeline. 
It defines the post processing of the result sets for keyword and neural search by normalizing the scores of each and then combining them with one of currently three available techniques (arithmetic, harmonic or geometric mean). +In OpenSearch 2.18, [hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is a linear combination of the lexical (match query) and neural (kNN) search scores. It first normalizes the scores and then combines them with one of three techniques (arithmetic, harmonic or geometric mean), each of which includes weighting parameters. -This search pipeline configuration lets OpenSearch users define how to normalize the scores and how to weigh the result sets. +The search pipeline configuration is how OpenSearch users define score normalization, combination, and weighting. # Finding the right hybrid search configuration is hard -As an OpenSearch user this leads to the ultimate question: which parameter set is the best for me and my application(s)? Or more concretely: what best normalization technique should I use and how much neural/keyword is ideal? +The question for a user of hybrid search in OpenSearch is how to choose the normalization and combination techniques and the weighting parameters for their application. -Unfortunately, there exists no one-size-fits-all solution. If there was the one best configuration there wouldn’t be a need to provide any options, right? The best configuration depends on a plethora of factors related to any given search application’s data, users, or domain. +What is best depends strongly on the corpus, on user behavior, and on the application domain – there is no one-size-fits-all solution. -However, there is a systematic way to arrive at this ideal set of parameters and even go beyond that. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set *globally* for all incoming queries. We will cover this approach first before moving on to a dynamic approach that identifies hybrid query parameters individually per query. +However, there is a systematic way to arrive at this ideal set of parameters. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set for all incoming queries; it is “global” because it doesn’t depend on per-query factors. We will cover this approach first before moving on to a dynamic approach that identifies hybrid query parameters individually per query. # Global hybrid search optimizer -To identify the best hybrid search configuration we treat this as a parameter optimization challenge. We know the values parameters can have, so we know what combinations exist: +We treat hybrid search configuration as a parameter optimization challenge. The parameters and combinations are: -* There are two [normalization techniques: `l2` and `min_max`](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/) -* There are three combination techniques: arithmetic mean, harmonic mean, geometric mean -* The keyword and neural search weights are values in the range from 0 to 1. +* Two [normalization techniques: `l2` and `min_max`](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/) +* Three combination techniques: arithmetic mean, harmonic mean, geometric mean +* The lexical and neural search weights, values in the range from 0 to 1. 
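To get a feeling for the size of this search space, a short Python sketch can enumerate the candidate configurations; the 0.1 step for the weights is the grid used in the experiments below:

```
import itertools

normalization = ["l2", "min_max"]
combination = ["arithmetic_mean", "harmonic_mean", "geometric_mean"]
# lexical weights in 0.1 steps; the neural weight is the complement
lexical_weights = [round(0.1 * step, 1) for step in range(11)]

configurations = [
    {"norm": norm, "combi": combi, "lexicalness": w, "neuralness": round(1.0 - w, 1)}
    for norm, combi, w in itertools.product(normalization, combination, lexical_weights)
]
print(len(configurations))  # 2 * 3 * 11 = 66 candidate configurations
```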
With this knowledge we can define a collection of parameter combinations to try out and compare to each other. To follow this path we need three things: @@ -48,7 +48,7 @@ A query set is a collection of queries. Ideally, query sets contain a representa * Very frequent queries (head queries), but also queries that are used rarely (tail queries) * Queries that are important to the business -* Queries that express different user intent classes (such as searching for a product category, searching for product category + color, searching for a brand) +* Queries that express different user intent classes (e.g. searching for a product category, searching for product category \+ color, searching for a brand) * Other classes depending on the individual search application These different queries are best sourced from a query log that captures all queries your users send to your system. One way of sampling these efficiently is [Probability-Proportional-to-Size Sampling](https://opensourceconnections.com/blog/2022/10/13/how-to-succeed-with-explicit-relevance-evaluation-using-probability-proportional-to-size-sampling/) (PPTSS). This method can generate a frequency weighted sample. @@ -57,19 +57,19 @@ We will run each query in the query set against a baseline first to see how our ## Judgments -Once a query set is available judgments come next. A judgment describes how relevant a particular document is for a given query. A judgment consists of three parts: the query, the document, and a (typically) numerical rating. +Once a query set is available, judgments come next. A judgment describes how relevant a particular document is for a given query. A judgment consists of three parts: the query, the document, and a (typically) numerical rating. -Ratings can be binary (0 or 1, that is irrelevant or relevant) or graded (for example 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, there are human raters going through query-document pairs and assigning these ratings according to some rules. On the other hand there are implicit judgments. Implicit judgments are derived from user behavior: user queries, viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple click through rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias. +Ratings can be binary (0 or 1, i.e. irrelevant or relevant) or graded (e.g. 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, human raters go through query-document pairs and assign these ratings. Implicit judgments, on the other hand, are derived from user behavior: user queries, viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple clickthrough rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias. -Recently, a third category of generating judgments emerged: LLM-as-a-judge. Here you use large language models like GPT-4o to judge query-doc pairs. +Recently, a third category of generating judgments emerged: LLM-as-a-judge. Here a large language model like GPT-4o judges query-doc pairs. 
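A minimal sketch of the LLM-as-a-judge idea could look as follows, assuming the `openai` Python client and the 0 to 3 grading scale described above; a real setup would batch requests, add rating guidelines and examples to the prompt, and validate the model output:

```
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def judge(query, title):
    # ask the model for a single graded relevance rating for one query-document pair
    prompt = (
        f"Query: {query}\n"
        f"Product title: {title}\n"
        "Rate how relevant this product is for the query on a scale from 0 (irrelevant) "
        "to 3 (perfectly relevant). Answer with the number only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return int(response.choices[0].message.content.strip())
```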
-All three categories have different strengths and weaknesses. Whichever you choose, make sure you have a decent amount of judgments. Twice the depth of your default search result page per query is usually a good starting point for explicit judgments. So if you show your users 24 results per result page you should rate the first 48 results for each query. +All three categories have different strengths and weaknesses. Whichever you choose, you need to have a decent amount of judgments. Twice the depth of your default search result page per query is usually a good starting point for explicit judgments. So if you show your users 24 results per result page you should rate the first 48 results for each query. -Implicit judgments have the advantage of scale: when already collecting user events (like queries, viewed documents and clicked documents) this is an enabling step for calculating 1,000s of judgments by modeling these events into judgments. +Implicit judgments have the advantage of scale: when already collecting user events (like queries, viewed documents and clicked documents) this is an enabling step for calculating 1,000s of judgments by modeling these events as judgments. ## Search metrics -With a query set and the corresponding judgments we can calculate search metrics. Widely used [search metrics are Precision, DCG, or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). +With a query set and the corresponding judgments we can calculate search metrics. Widely used [search metrics are Precision, DCG or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). Search metrics provide a way of measuring the search result quality of a search system numerically. We calculate search metrics for each configuration and this enables us to compare them objectively against each other. As a result we know which configuration scored best. @@ -77,7 +77,7 @@ If you’re looking for guidance and support to generate a query set, create imp ## Create a baseline with the ESCI Dataset -Let’s put all pieces together and calculate search metrics for one particular example: in the [hybrid search optimizer repository](https://github.com/o19s/opensearch-hybrid-search-optimization/) we use the [ESCI dataset](https://github.com/amazon-science/esci-data) and in [notebooks 1-3](https://github.com/o19s/opensearch-hybrid-search-optimization/tree/main/notebooks) we configure OpenSearch to run hybrid queries, index the products of the ESCI dataset, create a query set and execute each of the queries in a keyword search setting that we assume to be our baseline. The search metrics can be calculated as the ESCI dataset comes not only with products and queries but also with judgments. +Let’s put all pieces together and calculate search metrics for one particular example: in the [hybrid search optimizer repository](https://github.com/o19s/opensearch-hybrid-search-optimization/) we use the [ESCI dataset](https://github.com/amazon-science/esci-data) and in [notebooks 1-3](https://github.com/o19s/opensearch-hybrid-search-optimization/tree/main/notebooks) we configure OpenSearch to run hybrid queries, index the products of the ESCI dataset, create a query set and execute each of the queries in a lexical search setting that we assume to be our baseline. The search metrics can be calculated as the ESCI dataset comes not only with products and queries but also with judgments. We chose a `multi_match` query of the type `best_fields` as our baseline. 
We search in the different fields of the dataset with “best guess” fields weights. In a real-world scenario we recommend techniques like learning to boost based on Bayesian optimization to figure out the best field and field weight combination. @@ -122,16 +122,16 @@ These numbers are now the starting point for our optimization journey. We want t ## Identifying the best hybrid search configuration -With that starting point we can set off to explore the parameter space that hybrid search offers us. Our global hybrid search optimization notebook tries out 66 parameter combinations for hybrid search with the following set: +With that starting point, we can explore the parameter space that hybrid search offers us. Our global hybrid search optimization notebook tries out 66 parameter combinations for hybrid search with the following set: * Normalization technique: [`l2`, `min_max`] * Combination technique: [`arithmetic_mean`, `harmonic_mean`, `geometric_mean`] -* Keyword search weight: [`0.0`, `0.1`, `0.2`, `0.3`, `0.4`, `0.5`, `0.6`, `0.7`, `0.8`, `0.9`, `1.0`] +* Lexical search weight: [`0.0`, `0.1`, `0.2`, `0.3`, `0.4`, `0.5`, `0.6`, `0.7`, `0.8`, `0.9`, `1.0`] * Neural search weight: [`1.0`, `0.9`, `0.8`, `0.7`, `0.6`, `0.5`, `0.4`, `0.3`, `0.2`, `0.1`, `0.0`] -Neural and keyword search weights always add up to 1.0, so a keyword search weight of 0.1 automatically comes with a neural search weight of 0.9, a keyword search weight of 0.2 comes with a neural search weight of 0.8, ... +Neural and lexical search weights always add up to 1.0, so of course we don’t need to choose them independently. -This leaves us with 66 combinations to test: 2 normalization techniques * 3 combination techniques * 11 keyword/neural search weight combinations. +This leaves us with 66 combinations to test: 2 normalization techniques * 3 combination techniques * 11 lexical/neural search weight combinations. For each of these combinations we run the queries of the training set. To do so we use OpenSearch’s [temporary search pipeline capability](https://opensearch.org/docs/latest/search-plugins/search-pipelines/using-search-pipeline/#using-a-temporary-search-pipeline-for-a-request) that saves us from pre-creating all pipelines for the 66 parameter combinations. @@ -160,7 +160,7 @@ Here is a template of the temporary search pipelines we use for our hybrid searc "technique": combi, "parameters": { "weights": [ - keywordness, + lexicalness, neuralness ] } @@ -171,7 +171,7 @@ Here is a template of the temporary search pipelines we use for our hybrid searc } ``` -`norm` is the variable for the normalization technique, `combi` the variable for the combination technique, `keywordness` is the keyword search weight and `neuralness` is the neural search weight. +`norm` is the variable for the normalization technique, `combi` the variable for the combination technique, `lexicalness` is the lexical search weight and `neuralness` is the neural search weight. The neural part of the hybrid query is searching in a field with embeddings that were created based on the title of a product with the model `all-MiniLM-L6-v2`: @@ -186,7 +186,7 @@ The neural part of the hybrid query is searching in a field with embeddings that } ``` -Using the queries of the training dataset and retrieving the results we calculate the three search metrics DCG@10, NDCG@10 and Precision@10,. For the small dataset there is one pipeline configuration that scores best for all three metrics. 
The pipeline uses the l2 norm, arithmetic mean, a keyword search weight of 0.4 and a neural search weight of 0.6. +Using the queries of the training dataset and retrieving the results, we calculate the three search metrics DCG@10, NDCG@10 and Precision@10. For the small dataset, there is one pipeline configuration that scores best for all three metrics. The pipeline uses the l2 norm, arithmetic mean, a lexical search weight of 0.4 and a neural search weight of 0.6. The following metrics are calculated: @@ -202,7 +202,7 @@ Applying the potentially best hybrid search parameter combination to the test se | NDCG@10 | 0.24 | 0.26 | 0.23 | 0.25 | | Precision@10 | 0.27 | 0.29 | 0.24 | 0.27 | -Looking at these numbers we can see improvements across all metrics for both datasets. To recap, at this point we did the following steps: +We can see improvements across all metrics for both datasets. To recap, up to here, we did the following: * Create a query set by randomly sampling * Generate judgments (to be precise, we only used the existing judgments of the ESCI dataset) @@ -212,39 +212,39 @@ Looking at these numbers we can see improvements across all metrics for both dat Two things are important to note: -* While the systematic approach can be transferred to other applications, the experiment results cannot! It is necessary to always evaluate and experiment with your own data. +* While the systematic approach can be transferred to other applications, the experiment results cannot\! It is necessary to always evaluate and experiment with your own data. * The ESCI dataset does not have 100% coverage of the judgments. On average we saw roughly 35% judgment coverage among the top 10 retrieved results per query. This leaves us with some uncertainty. -The improvements tell us that we optimize our metrics on average when switching to hybrid search with the above mentioned parameter values. But of course there are queries that are winners and queries that are losers when doing this switch. This is something we can virtually always observe when comparing two search configurations with each other. While one configuration outperforms the other on average not every query will profit from the configuration. +The improvements tell us that we optimize our metrics on average when switching to hybrid search with the above parameter values. But of course there are queries that are winners and queries that are losers when doing this switch. This is something we can virtually always observe when comparing two search configurations with each other. While one configuration outperforms the other on average, not every query will profit from the configuration. -The following chart shows the DCG@10 values of the training queries of the small query set. The x-axis represents the search pipeline with l2 norm, arithmetic mean, 0.1 keyword search weight and 0.9 neural search weight (configuration A). The y-axis represents the search pipeline with identical normalization and combination technique but switched weights: 0.9 keyword search weight, 0.1 neural search weight (configuration B). +The following chart shows the DCG@10 values of the training queries of the small query set. The x-axis represents the search pipeline with l2 norm, arithmetic mean, 0.1 lexical search weight and 0.9 neural search weight (configuration A). The y-axis represents the search pipeline with identical normalization and combination technique but switched weights: 0.9 lexical search weight, 0.1 neural search weight (configuration B). 
-Scatter Plot of DCG values for Keyword-heavy search configuration and Neural-heavy search configuration{:style="width: 100%; max-width: 800px; height: auto; text-align: center"} +Scatter Plot of DCG values for lexical-heavy search configuration and Neural-heavy search configuration{:style="width: 100%; max-width: 800px; height: auto; text-align: center"} The clearest winners of configuration B are those that are located on the y-axis: they have a DCG score of 0 for this configuration. And for configuration A some even score above 15. -As we strive for having winners only this now leads us to the question: improvements on average are fine but how can we tackle this even more targeted and come up with an approach that provides us the best configuration per-query instead of one good configuration for all queries? +As we strive for having winners only this now leads us to the question: improvements on average are fine but how can we tackle this in a more targeted way to come up with an approach that provides us the best configuration per-query instead of one good configuration for all queries? # Dynamic hybrid search optimizer -We call this approach to identify a suitable configuration individually per hybrid search query *dynamic hybrid search optimization*. To move in that direction we treat hybrid search as a query understanding challenge: by understanding certain features of the query we develop an approach to predict the “neuralness” of a query. “Neuralness” is used as the term describing the neural search weight for the hybrid search queries. +We call identifying a suitable configuration individually per hybrid search query *dynamic hybrid search optimization*. To move in that direction we treat hybrid search as a query understanding challenge: by understanding certain features of the query we develop an approach to predict the “neuralness” of a query. “Neuralness” is used as the term describing the neural search weight for the hybrid search queries. You may ask: why predict only the “neuralness” and none of the other parameter values? The results of the global hybrid search optimizer (large query set) showed us that the majority of search configurations share two parameter values: the l2 normalization technique and the arithmetic mean as the combination technique. Looking at the top 5 configurations per search metric (DCG@10, NDCG@10 and Precision@10) only five out of the 15 pipelines have `min_max` as an alternative normalization technique and none of these configurations has another combination technique. -With that knowledge we assume the `l2` normalization and the arithmetic mean combination technique to be best suited throughout the whole dataset. +With that knowledge we assume the l2 normalization and the arithmetic mean combination technique to be best suited throughout the whole dataset. -That leaves us with the parameter values for the neural search weight and the keyword search weight. By predicting one we can calculate the other by subtracting the prediction from 1: by predicting the “neuralness” we can calculate the “keywordness” by 1 - “neuralness”. +That leaves us with the parameter values for the neural search weight and the lexical search weight. By predicting one we can calculate the other by subtracting the prediction from 1: by predicting the “neuralness” we can calculate the “lexicalness” by 1 - “neuralness”. To validate our hypothesis that we came up with a couple of feature groups and features within these groups. 
Afterwards we trained machine learning models to predict an expected NDCG value for a given “neuralness” of a query. ## Feature groups and features -We divide the features into three groups: query features, keyword search result features and neural search result features: +We divide the features into three groups: query features, lexical search result features and neural search result features: * Query features: these features describe the user query string. -* Keyword search result features: these features describe the results that the user query retrieves when executed as a keyword search. +* Lexical search result features: these features describe the results that the user query retrieves when executed as a lexical search. * Neural search result features: these features describe the results that the user query retrieves as a neural search. ### Query features @@ -254,16 +254,16 @@ We divide the features into three groups: query features, keyword search result * Contains number: does the query contain one or more numbers? * Contains special character: does the query contain one or more special characters (non-alphanumeric characters)? -### Keyword search result features +### Lexical search result features -* Number of results: the number of results for the keyword query. +* Number of results: the number of results for the lexical query. * Maximum title score: the maximum score of the titles of the retrieved top 10 documents. The scores are BM25 scores calculated individually per result set. That means that the BM25 score is not calculated on the whole index but only on the retrieved subset for the query, making the scores more comparable to each other and less prone to outliers that could result from high IDF values for very rare query terms. * Sum of title scores: the sum of the title scores of the top 10 documents, again calculated per-result set. We use the sum of the scores (and no average value) as an aggregate to have a measure of how relevant all retrieved top 10 titles are. BM25 scores are not normalized so using the sum instead of the average seemed reasonable. ### Neural search result features * Maximum semantic score: the maximum semantic score of the retrieved top 10 documents. This is the score we receive for a neural query based on the query’s similarity to the title. -* Average semantic score: By contrast to BM 25 scores the semantic scores are normalized and in the range of 0 to 1. Using the average score seems more reasonable than going for the sum here. +* Average semantic score: By contrast to BM 25 scores the semantic scores are normalized and in the range of 0 to 1\. Using the average score seems more reasonable than going for the sum here. ## Feature engineering @@ -277,15 +277,15 @@ With the appropriate data at hand we explored different algorithms and experimen We went for two relatively simple algorithms: linear regression and random forest regression. We applied cross validation, regularization, and tried out all different feature combinations. This resulted in interesting findings that are summarized in the following section. -**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (that is when comparing the RMSE scores within one cross validation run for one feature combination). 
+**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (i.e. when comparing the RMSE scores within one cross validation run for one feature combination). -**Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 compared to 0.22 for the best linear regression model (large dataset) - both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. +**Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 vs. 0.22 for the best linear regression model (large dataset) \- both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. -**Feature combinations of all groups have the lowest RMSE**: the lowest error scores can be achieved when combining features from all three feature groups (query, keyword search result, neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with keyword search result feature combinations only serves as the best alternative. +**Feature combinations of all groups have the lowest RMSE**: the lowest error scores can be achieved when combining features from all three feature groups (query, lexical search result, neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with lexical search result feature combinations only serves as the best alternative. -This is particularly interesting when thinking about productionizing this: putting an approach like this in production means that features need to be calculated per query during query time. Getting keyword search result features and neural search result features requires running these queries which would add significant latency to the overall query even prior to inference time. +This is particularly interesting when thinking about productionizing this: putting an approach like this in production means that features need to be calculated per query during query time. Getting lexical search result features and neural search result features requires running these queries which would add significant latency to the overall query even prior to inference time. -The following picture shows the distribution of RMSE scores within one cross validation run when fitting random forest regression models with feature combinations within one group (blue: neural search features, red: keyword result features, green: query features) and across the groups (purple: features from all groups). The feature mix (purple) scores lowest (best), followed by training on keyword search result features only (red). 
+The following picture shows the distribution of RMSE scores within one cross validation run when fitting random forest regression models with feature combinations within one group (blue: neural search features, red: lexical result features, green: query features) and across the groups (purple: features from all groups). The feature mix (purple) scores lowest (best), followed by training on lexical search result features only (red). Box plot showing the distribution of RMSE scores within one cross validation run when fitting the random forest regression model{:style="width: 100%; max-width: 800px; height: auto; text-align: center"} The overall picture does not change when looking at the numbers for the linear model: @@ -294,7 +294,7 @@ The overall picture does not change when looking at the numbers for the linear m ## Model testing Let’s look how the trained models perform when applying them dynamically on our test set. -For each query of the test set we engineer the features and let the model make the inference for the “neuralness” values between 0.0 and 1.0, since “neuralness” also is a feature that we pass into the model. We then take the neuralness value that resulted in the highest prediction which is the best NDCG value. By knowing the “neuralness” we can calculate the “keywordness” by subtracting the “neuralness” from 1. +For each query of the test set we engineer the features and let the model make the inference for the “neuralness” values between 0.0 and 1.0, since “neuralness” also is a feature that we pass into the model. We then take the neuralness value that resulted in the highest prediction which is the best NDCG value. By knowing the “neuralness” we can calculate the “lexicalness” by subtracting the “neuralness” from 1. We again use the l2 norm and arithmetic mean as our hybrid search normalization and combination parameter values as they scored best in the global hybrid search optimizer experiment. With that we build the hybrid query, execute it, retrieve the results and calculate the search metrics like in the baseline and global hybrid search optimizer. @@ -314,11 +314,11 @@ Metrics for the large dataset: | NDCG@10 | 0.23 | 0.25 | 0.27 | 0.27 | | Precision@10 | 0.24 | 0.27 | 0.29 | 0.29 | -Looking at these numbers shows us a steady positive trend starting from the baseline going all the way to the dynamic predictions of keywordness and neuralness per query. The large dataset shows a DCG increase of 8.9% rising from 9.3 to 10.13, the small dataset shows an increase of 9.3%. The other metrics increase as well: NDCG shows an improvement of 8% for the large dataset, 7.7% for the small dataset, Precision shows an improvement of 7.4% for the large dataset and 10.3% for the small dataset. +Looking at these numbers shows us a steady positive trend starting from the baseline going all the way to the dynamic predictions of lexicalness and neuralness per query. The large dataset shows a DCG increase of 8.9% rising from 9.3 to 10.13, the small dataset shows an increase of 9.3%. The other metrics increase as well: NDCG shows an improvement of 7.4%for the large dataset, 10.3% for the small dataset, Precision shows an improvement of 8% for the large dataset and 7.7% for the small dataset. -Interestingly, both models score exactly equally. The reason for this is that while they both predict different NDCG values, they predict the best ones with the same “neuralness” as an input feature. 
So while the models may differ in RMSE scores during the evaluation phase they provide equal results when applied to the test set. +Interestingly, both models score exactly equally. The reason for this is that while they both predict different NDCG values, they predict the best ones with the same “neuralness” as an input feature. So while the models may differ in RMSE scores during the evaluation phase, they provide equal results when applied to the test set. -Despite the low judgement coverage we see improvements for all metrics. This gives us confidence that this approach can provide value for search systems not only switching from keyword to hybrid search but also those who already are in production but have never used any systematic process to evaluate and identify the best settings. +Despite the low judgement coverage we see improvements for all metrics. This gives us confidence that this approach can provide value for search systems not only switching from lexical to hybrid search but also those who already are in production but have never used any systematic process to evaluate and identify the best settings. # Conclusion @@ -328,13 +328,13 @@ We encourage everyone to adopt the approach and explore its usefulness in their # Future work -The currently planned next steps include replicating the approach with a dataset that has a higher judgment coverage and covers a different domain to see its generalizability. +The currently planned next steps include replicating the approach with a dataset that has higher judgment coverage and covers a different domain to see its generalizability. -Optimizing hybrid search typically is not the first step in search result quality optimization. Optimizing keyword search results first is especially important as the keyword search query is part of the hybrid search query. Bayesian optimization is an efficient technique to efficiently identify the best set of fields and field weights, sometimes also referred to as learning to boost. +Optimizing hybrid search typically is not the first step in search result quality optimization. Optimizing lexical search results first is especially important as the lexical search query is part of the hybrid search query. Bayesian optimization is an efficient technique to efficiently identify the best set of fields and field weights, sometimes also referred to as learning to boost. -The straight forward approach of trying out 66 different combinations can be created more elegantly by applying a technique like Bayesian optimization as well. In particular for large search indexes and a large amount of queries we expect this to result in a performance improvement. +The straightforward approach of trying out 66 different combinations can be created more elegantly by applying a technique like Bayesian optimization as well. In particular for large search indexes and a large amount of queries we expect this to result in a performance improvement. 
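As a rough illustration of that idea, a Bayesian search over the same parameter space could be sketched with scikit-optimize (one possible library choice). The `evaluate_configuration` helper, which would run the training queries through a temporary search pipeline and return the mean NDCG@10 for a configuration, is assumed here and is not part of the notebooks:

```
from skopt import gp_minimize
from skopt.space import Categorical, Real

space = [
    Categorical(["l2", "min_max"], name="norm"),
    Categorical(["arithmetic_mean", "harmonic_mean", "geometric_mean"], name="combi"),
    Real(0.0, 1.0, name="neuralness"),
]

def objective(params):
    norm, combi, neuralness = params
    # assumed helper: runs the training queries with a temporary search pipeline
    # and returns the mean NDCG@10 for this configuration
    return -evaluate_configuration(norm, combi, 1.0 - neuralness, neuralness)

result = gp_minimize(objective, space, n_calls=30, random_state=42)
```

Because `gp_minimize` minimizes, the objective returns the negated NDCG@10; roughly 30 evaluations can then replace the exhaustive 66-run grid.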
-Reciprocal rank fusion is another way of combining keyword search and neural search, currently under active development: +Reciprocal rank fusion is another way of combining lexical search and neural search, currently under active development: * [https://github.com/opensearch-project/neural-search/issues/865](https://github.com/opensearch-project/neural-search/issues/865) * [https://github.com/opensearch-project/neural-search/issues/659](https://github.com/opensearch-project/neural-search/issues/659) From 63c41629257947aab70aed2a107873aade287c03 Mon Sep 17 00:00:00 2001 From: wrigleyDan Date: Sun, 22 Dec 2024 14:56:55 +0100 Subject: [PATCH 05/18] integrate Stavros' final feedback and reviewdog check feedback. Signed-off-by: wrigleyDan --- _posts/2024-12-xx-hybrid-search-optimization.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index ba209e2ce..a8754a32f 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -15,7 +15,7 @@ meta_description: Tackle the optimization of hybrid search in a systematic way a [Hybrid search combines lexical and neural search to improve search relevance](https://opensearch.org/docs/latest/search-plugins/hybrid-search); this combination shows promising results across industries and [in benchmarks](https://opensearch.org/blog/semantic-science-benchmarks/). -In OpenSearch 2.18, [hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is a linear combination of the lexical (match query) and neural (kNN) search scores. It first normalizes the scores and then combines them with one of three techniques (arithmetic, harmonic or geometric mean), each of which includes weighting parameters. +In OpenSearch 2.18, [hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is an arithmetic combination of the lexical (match query) and neural (k-NN) search scores. It first normalizes the scores and then combines them with one of three techniques (arithmetic, harmonic or geometric mean), each of which includes weighting parameters. The search pipeline configuration is how OpenSearch users define score normalization, combination, and weighting. @@ -25,11 +25,11 @@ The question for a user of hybrid search in OpenSearch is how to choose the norm What is best depends strongly on the corpus, on user behavior, and on the application domain – there is no one-size-fits-all solution. -However, there is a systematic way to arrive at this ideal set of parameters. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set for all incoming queries; it is “global” because it doesn’t depend on per-query factors. We will cover this approach first before moving on to a dynamic approach that identifies hybrid query parameters individually per query. +However, there is a systematic way to arrive at this ideal set of parameters. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set for all incoming queries; it is “global” because it doesn’t depend on per-query factors. We will cover this approach first before moving on to a dynamic approach that takes into account per-query signals. # Global hybrid search optimizer -We treat hybrid search configuration as a parameter optimization challenge. 
The parameters and combinations are: +We treat hybrid search configuration as a parameter optimization problem. The parameters and combinations are: * Two [normalization techniques: `l2` and `min_max`](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/) * Three combination techniques: arithmetic mean, harmonic mean, geometric mean @@ -40,7 +40,7 @@ With this knowledge we can define a collection of parameter combinations to try 1. Query set: a collection of queries. 2. Judgments: a collection of ratings that tell how relevant a result for a given query is. -3. Search Metrics: a numeric expression of how well the search system does in returning relevant documents for queries +3. Search Quality Metrics: a numeric expression of how well the search system does in returning relevant documents for queries ## Query set @@ -48,7 +48,7 @@ A query set is a collection of queries. Ideally, query sets contain a representa * Very frequent queries (head queries), but also queries that are used rarely (tail queries) * Queries that are important to the business -* Queries that express different user intent classes (e.g. searching for a product category, searching for product category \+ color, searching for a brand) +* Queries that express different user intent classes (for example searching for a product category, searching for product category \+ color, searching for a brand) * Other classes depending on the individual search application These different queries are best sourced from a query log that captures all queries your users send to your system. One way of sampling these efficiently is [Probability-Proportional-to-Size Sampling](https://opensourceconnections.com/blog/2022/10/13/how-to-succeed-with-explicit-relevance-evaluation-using-probability-proportional-to-size-sampling/) (PPTSS). This method can generate a frequency weighted sample. @@ -59,7 +59,7 @@ We will run each query in the query set against a baseline first to see how our Once a query set is available, judgments come next. A judgment describes how relevant a particular document is for a given query. A judgment consists of three parts: the query, the document, and a (typically) numerical rating. -Ratings can be binary (0 or 1, i.e. irrelevant or relevant) or graded (e.g. 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, human raters go through query-document pairs and assign these ratings. Implicit judgments, on the other hand, are derived from user behavior: user queries, viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple clickthrough rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias. +Ratings can be binary (0 or 1, that is irrelevant or relevant) or graded (for example 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, human raters go through query-document pairs and assign these ratings. Implicit judgments, on the other hand, are derived from user behavior: user queries, viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple clickthrough rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). 
All come with limitations and/or deal differently with biases like position bias. Recently, a third category of generating judgments emerged: LLM-as-a-judge. Here a large language model like GPT-4o judges query-doc pairs. @@ -69,7 +69,7 @@ Implicit judgments have the advantage of scale: when already collecting user eve ## Search metrics -With a query set and the corresponding judgments we can calculate search metrics. Widely used [search metrics are Precision, DCG or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). +With a query set and the corresponding judgments we can calculate search quality metrics. Widely used [search metrics are Precision, DCG, or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). Search metrics provide a way of measuring the search result quality of a search system numerically. We calculate search metrics for each configuration and this enables us to compare them objectively against each other. As a result we know which configuration scored best. @@ -279,7 +279,7 @@ We applied cross validation, regularization, and tried out all different feature **Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (i.e. when comparing the RMSE scores within one cross validation run for one feature combination). -**Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 vs. 0.22 for the best linear regression model (large dataset) \- both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. +**Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 compared to 0.22 for the best linear regression model (large dataset) \- both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. **Feature combinations of all groups have the lowest RMSE**: the lowest error scores can be achieved when combining features from all three feature groups (query, lexical search result, neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with lexical search result feature combinations only serves as the best alternative. 
From 6cc7b57607ea98758c2b1a8f391b41973f1e34ca Mon Sep 17 00:00:00 2001 From: wrigleyDan Date: Sun, 22 Dec 2024 15:00:28 +0100 Subject: [PATCH 06/18] integrate another reviewdog check correction Signed-off-by: wrigleyDan --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index a8754a32f..e6bd1f347 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -277,7 +277,7 @@ With the appropriate data at hand we explored different algorithms and experimen We went for two relatively simple algorithms: linear regression and random forest regression. We applied cross validation, regularization, and tried out all different feature combinations. This resulted in interesting findings that are summarized in the following section. -**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (i.e. when comparing the RMSE scores within one cross validation run for one feature combination). +**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (that is when comparing the RMSE scores within one cross validation run for one feature combination). **Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 compared to 0.22 for the best linear regression model (large dataset) \- both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. 
From 817f78c84082886b0f1e7539770bf3af924baaab Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:07:51 +0100 Subject: [PATCH 07/18] Update _posts/2024-12-xx-hybrid-search-optimization.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index e6bd1f347..a32625524 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -1,6 +1,6 @@ --- layout: post -title: "Optimizing Hybrid Search in OpenSearch" +title: "Optimizing hybrid search in OpenSearch" authors: - dwrigley date: 2024-12-xx From ef8a227b27c87f1c266e2c381f3265252aa16462 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:08:17 +0100 Subject: [PATCH 08/18] Update _posts/2024-12-xx-hybrid-search-optimization.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index a32625524..25f821120 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -15,7 +15,7 @@ meta_description: Tackle the optimization of hybrid search in a systematic way a [Hybrid search combines lexical and neural search to improve search relevance](https://opensearch.org/docs/latest/search-plugins/hybrid-search); this combination shows promising results across industries and [in benchmarks](https://opensearch.org/blog/semantic-science-benchmarks/). -In OpenSearch 2.18, [hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is an arithmetic combination of the lexical (match query) and neural (k-NN) search scores. It first normalizes the scores and then combines them with one of three techniques (arithmetic, harmonic or geometric mean), each of which includes weighting parameters. +In OpenSearch 2.18, [hybrid search](https://opensearch.org/docs/latest/search-plugins/hybrid-search/) is an arithmetic combination of the lexical (match query) and neural (k-NN) search scores. It first normalizes the scores and then combines them with one of three techniques (arithmetic, harmonic, or geometric mean), each of which includes weighting parameters. The search pipeline configuration is how OpenSearch users define score normalization, combination, and weighting. 
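To make the normalization and combination step more concrete, here is a minimal sketch of how `min_max` and `l2` score normalization and a weighted arithmetic mean combination can be expressed. It is only an illustration of the idea, not the OpenSearch implementation, and the example scores are made up.

```
import math

def min_max_normalize(scores):
    # scale scores into [0, 1] relative to the result set
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def l2_normalize(scores):
    # divide each score by the L2 norm of the result set's scores
    norm = math.sqrt(sum(s * s for s in scores))
    return [s / norm if norm > 0 else 0.0 for s in scores]

def combine_arithmetic(lexical, neural, lexical_weight, neural_weight):
    # weighted arithmetic mean of the two normalized sub-query scores
    return (lexical_weight * lexical + neural_weight * neural) / (lexical_weight + neural_weight)

lexical_scores = min_max_normalize([12.3, 7.1, 3.4])
neural_scores = l2_normalize([0.83, 0.77, 0.64])
print(combine_arithmetic(lexical_scores[0], neural_scores[0], lexical_weight=0.3, neural_weight=0.7))
```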
From b65efb7be453c5589b219ed2bf465bc57b9f070c Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:08:35 +0100 Subject: [PATCH 09/18] Update _posts/2024-12-xx-hybrid-search-optimization.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index 25f821120..646b481d4 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -19,7 +19,7 @@ In OpenSearch 2.18, [hybrid search](https://opensearch.org/docs/latest/search-pl The search pipeline configuration is how OpenSearch users define score normalization, combination, and weighting. -# Finding the right hybrid search configuration is hard +# Finding the right hybrid search configuration can be difficult The question for a user of hybrid search in OpenSearch is how to choose the normalization and combination techniques and the weighting parameters for their application. From 82f02250b2846137dc7b49d50e7502e6c47e1921 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:08:50 +0100 Subject: [PATCH 10/18] Update _posts/2024-12-xx-hybrid-search-optimization.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index 646b481d4..94ccb8291 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -21,7 +21,7 @@ The search pipeline configuration is how OpenSearch users define score normaliza # Finding the right hybrid search configuration can be difficult -The question for a user of hybrid search in OpenSearch is how to choose the normalization and combination techniques and the weighting parameters for their application. +The primary question for a user of hybrid search in OpenSearch is how to choose the normalization and combination techniques and the weighting parameters for their application. What is best depends strongly on the corpus, on user behavior, and on the application domain – there is no one-size-fits-all solution. 
From dcf9ab0d023631ad5d8ea572d72e68194d4f0263 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:09:11 +0100 Subject: [PATCH 11/18] Update _posts/2024-12-xx-hybrid-search-optimization.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index 94ccb8291..bb7f1145a 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -23,7 +23,7 @@ The search pipeline configuration is how OpenSearch users define score normaliza The primary question for a user of hybrid search in OpenSearch is how to choose the normalization and combination techniques and the weighting parameters for their application. -What is best depends strongly on the corpus, on user behavior, and on the application domain – there is no one-size-fits-all solution. +What is best depends strongly on the corpus, on user behavior, and on the application domain---there is no one-size-fits-all solution. However, there is a systematic way to arrive at this ideal set of parameters. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set for all incoming queries; it is “global” because it doesn’t depend on per-query factors. We will cover this approach first before moving on to a dynamic approach that takes into account per-query signals. From c42b938b0d349c67d7b8c11a6f2da5a46f0c1868 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:09:30 +0100 Subject: [PATCH 12/18] Update _posts/2024-12-xx-hybrid-search-optimization.md Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index bb7f1145a..ef2073e4b 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -328,7 +328,7 @@ We encourage everyone to adopt the approach and explore its usefulness in their # Future work -The currently planned next steps include replicating the approach with a dataset that has higher judgment coverage and covers a different domain to see its generalizability. +The currently planned next steps include replicating the approach with a dataset that has higher judgment coverage and covers a different domain in order to determine its generalizability. Optimizing hybrid search typically is not the first step in search result quality optimization. Optimizing lexical search results first is especially important as the lexical search query is part of the hybrid search query. Bayesian optimization is an efficient technique to efficiently identify the best set of fields and field weights, sometimes also referred to as learning to boost. 
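As a loose illustration of what such a learning-to-boost loop could look like, the following sketch uses `gp_minimize` from scikit-optimize to search for field weights. The helper `evaluate_ndcg_at_10` is hypothetical (it would run the query set with the given boosts and return the mean NDCG@10), and the field names are only examples.

```
from skopt import gp_minimize
from skopt.space import Real

def evaluate_ndcg_at_10(field_weights):
    # hypothetical helper: run the lexical query set with these field boosts
    # and return the mean NDCG@10 over the training queries
    raise NotImplementedError("query execution and metric calculation go here")

search_space = [
    Real(0.0, 10.0, name="title_weight"),
    Real(0.0, 10.0, name="description_weight"),
    Real(0.0, 10.0, name="bullet_points_weight"),
]

def objective(weights):
    # gp_minimize minimizes, so return the negative NDCG@10
    boosts = dict(zip(["title", "description", "bullet_points"], weights))
    return -evaluate_ndcg_at_10(boosts)

# Example usage once evaluate_ndcg_at_10 is implemented:
# result = gp_minimize(objective, search_space, n_calls=50, random_state=42)
# print(result.x, -result.fun)
```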
From f396463ede225ac868852462c4327f74217a1aad Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:30:42 +0100 Subject: [PATCH 13/18] Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- .../2024-12-xx-hybrid-search-optimization.md | 178 +++++++++--------- 1 file changed, 89 insertions(+), 89 deletions(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index ef2073e4b..920c3d56c 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -25,61 +25,61 @@ The primary question for a user of hybrid search in OpenSearch is how to choose What is best depends strongly on the corpus, on user behavior, and on the application domain---there is no one-size-fits-all solution. -However, there is a systematic way to arrive at this ideal set of parameters. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set for all incoming queries; it is “global” because it doesn’t depend on per-query factors. We will cover this approach first before moving on to a dynamic approach that takes into account per-query signals. +However, there is a systematic way to arrive at this ideal set of parameters. We call identifying the best set of parameters *global hybrid search optimization*: we identify the best parameter set for all incoming queries; it is "global" because it doesn't depend on per-query factors. We will cover this approach first before moving on to a dynamic approach that takes into account per-query signals. # Global hybrid search optimizer We treat hybrid search configuration as a parameter optimization problem. The parameters and combinations are: -* Two [normalization techniques: `l2` and `min_max`](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/) -* Three combination techniques: arithmetic mean, harmonic mean, geometric mean +* Two [normalization techniques: `l2` and `min_max`](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/). +* Three combination techniques: arithmetic mean, harmonic mean, geometric mean. * The lexical and neural search weights, values in the range from 0 to 1. -With this knowledge we can define a collection of parameter combinations to try out and compare to each other. To follow this path we need three things: +With this knowledge we can define a collection of parameter combinations to try out and compare. To follow this path we need three things: -1. Query set: a collection of queries. -2. Judgments: a collection of ratings that tell how relevant a result for a given query is. -3. Search Quality Metrics: a numeric expression of how well the search system does in returning relevant documents for queries +1. Query set: A collection of queries. +2. Judgments: A collection of ratings that indicate the relevance of a result for a given query. +3. Search quality metrics: A numeric expression of how well the search system performs in returning relevant documents for queries. ## Query set -A query set is a collection of queries. Ideally, query sets contain a representative set of queries. Representative means that different query classes are included in this query set: +A query set is a collection of queries. Ideally, query sets contain a representative set of queries. 
"Representative" means that different query classes are included in this query set: -* Very frequent queries (head queries), but also queries that are used rarely (tail queries) +* Very frequent queries (head queries) but also queries that are rarely used (tail queries) * Queries that are important to the business -* Queries that express different user intent classes (for example searching for a product category, searching for product category \+ color, searching for a brand) -* Other classes depending on the individual search application +* Queries that express different user intent classes (for example, searching for a product category, searching for product category \+ color, searching for a brand) +* Other classes, depending on the individual search application -These different queries are best sourced from a query log that captures all queries your users send to your system. One way of sampling these efficiently is [Probability-Proportional-to-Size Sampling](https://opensourceconnections.com/blog/2022/10/13/how-to-succeed-with-explicit-relevance-evaluation-using-probability-proportional-to-size-sampling/) (PPTSS). This method can generate a frequency weighted sample. +These different queries are best sourced from a query log that captures all queries your users send to your system. One way of sampling these efficiently is [Probability-Proportional-to-Size Sampling](https://opensourceconnections.com/blog/2022/10/13/how-to-succeed-with-explicit-relevance-evaluation-using-probability-proportional-to-size-sampling/) (PPTSS). This method can generate a frequency-weighted sample. -We will run each query in the query set against a baseline first to see how our search result quality is at the beginning of this experimentation phase. +We will first run each query in the query set against a baseline to determine our search result quality at the beginning of this experimentation phase. ## Judgments Once a query set is available, judgments come next. A judgment describes how relevant a particular document is for a given query. A judgment consists of three parts: the query, the document, and a (typically) numerical rating. -Ratings can be binary (0 or 1, that is irrelevant or relevant) or graded (for example 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, human raters go through query-document pairs and assign these ratings. Implicit judgments, on the other hand, are derived from user behavior: user queries, viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple clickthrough rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias. +Ratings can be binary (0 or 1, that is, irrelevant or relevant) or graded (for example, 0 to 3, definitely irrelevant to definitely relevant). In the case of explicit judgments, human raters review query-document pairs and assign these ratings. Implicit judgments, on the other hand, are derived from user behavior: user queries and viewed and clicked documents. Implicit judgments can be modeled with [click models that emerged from web search](https://clickmodels.weebly.com/) in the early 2010s and range from simple click-through rates to more [complex approaches](https://www.youtube.com/watch?v=wa88XShl7hs). All come with limitations and/or deal differently with biases like position bias. 
-Recently, a third category of generating judgments emerged: LLM-as-a-judge. Here a large language model like GPT-4o judges query-doc pairs. +Recently, a third category of judgment generation has emerged: LLM-as-a-judge. Here a large language model like GPT-4o judges query-doc pairs. -All three categories have different strengths and weaknesses. Whichever you choose, you need to have a decent amount of judgments. Twice the depth of your default search result page per query is usually a good starting point for explicit judgments. So if you show your users 24 results per result page you should rate the first 48 results for each query. +All three categories have different strengths and weaknesses. Whichever you choose, you need to have a decent amount of judgments. Twice the depth of your default search result page per query is usually a good starting point for explicit judgments. So if you show your users 24 results per result page, you should rate the first 48 results for each query. -Implicit judgments have the advantage of scale: when already collecting user events (like queries, viewed documents and clicked documents) this is an enabling step for calculating 1,000s of judgments by modeling these events as judgments. +Implicit judgments have the advantage of scale: when already collecting user events (like queries, viewed documents, and clicked documents), this is an enabling step for calculating thousands of judgments by modeling these events as judgments. ## Search metrics -With a query set and the corresponding judgments we can calculate search quality metrics. Widely used [search metrics are Precision, DCG, or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). +With a query set and the corresponding judgments, we can calculate search quality metrics. Widely used [search metrics are Precision, DCG, or NDCG](https://opensourceconnections.com/blog/2020/02/28/choosing-your-search-relevance-metric/). -Search metrics provide a way of measuring the search result quality of a search system numerically. We calculate search metrics for each configuration and this enables us to compare them objectively against each other. As a result we know which configuration scored best. +Search metrics provide a way of measuring the search result quality of a search system numerically. We calculate search metrics for each configuration, and this enables us to compare them objectively against each other. As a result we know which configuration scored best. -If you’re looking for guidance and support to generate a query set, create implicit judgments based on user behavior signals or calculate metrics based on these, feel free to [check out the search result quality evaluation framework](https://github.com/o19s/opensearch-search-quality-evaluation/). +If you're looking for guidance and support in generating a query set, creating implicit judgments based on user behavior signals, or calculating metrics based on these signals, feel free to [check out the search result quality evaluation framework](https://github.com/o19s/opensearch-search-quality-evaluation/). 
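To make the metrics themselves tangible, here is a small sketch of DCG@10, NDCG@10, and Precision@10 computed from the graded judgments of a single ranked result list. This is the common textbook formulation, not necessarily the exact variant implemented in the evaluation framework linked above.

```
import math

def dcg_at_k(grades, k=10):
    # graded relevance with a logarithmic position discount
    return sum(g / math.log2(i + 2) for i, g in enumerate(grades[:k]))

def ndcg_at_k(grades, k=10):
    ideal = dcg_at_k(sorted(grades, reverse=True), k)
    return dcg_at_k(grades, k) / ideal if ideal > 0 else 0.0

def precision_at_k(grades, k=10, relevant_from=1):
    # fraction of the top k results rated as relevant
    return sum(1 for g in grades[:k] if g >= relevant_from) / k

# judgments of the top 10 results for one query, in ranked order (toy data)
grades = [3, 2, 0, 1, 0, 0, 2, 0, 0, 1]
print(dcg_at_k(grades), ndcg_at_k(grades), precision_at_k(grades))
```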
-## Create a baseline with the ESCI Dataset +## Create a baseline with the ESCI dataset -Let’s put all pieces together and calculate search metrics for one particular example: in the [hybrid search optimizer repository](https://github.com/o19s/opensearch-hybrid-search-optimization/) we use the [ESCI dataset](https://github.com/amazon-science/esci-data) and in [notebooks 1-3](https://github.com/o19s/opensearch-hybrid-search-optimization/tree/main/notebooks) we configure OpenSearch to run hybrid queries, index the products of the ESCI dataset, create a query set and execute each of the queries in a lexical search setting that we assume to be our baseline. The search metrics can be calculated as the ESCI dataset comes not only with products and queries but also with judgments. +Let's put all the pieces together and calculate search metrics for one particular example: in the [hybrid search optimizer repository](https://github.com/o19s/opensearch-hybrid-search-optimization/) we use the [ESCI dataset](https://github.com/amazon-science/esci-data), and in [notebooks 1--3](https://github.com/o19s/opensearch-hybrid-search-optimization/tree/main/notebooks) we configure OpenSearch to run hybrid queries, index the products of the ESCI dataset, create a query set, and execute each of the queries in a lexical search setting that we assume to be our baseline. The search metrics can be calculated because the ESCI dataset comes not only with products and queries but also with judgments. -We chose a `multi_match` query of the type `best_fields` as our baseline. We search in the different fields of the dataset with “best guess” fields weights. In a real-world scenario we recommend techniques like learning to boost based on Bayesian optimization to figure out the best field and field weight combination. +We chose a `multi_match` query of the type `best_fields` as our baseline. We search in the different dataset fields with "best guess" fields weights. In a real-world scenario we recommend techniques like learning to boost based on Bayesian optimization to figure out the best field and field weight combination. ``` { @@ -106,34 +106,34 @@ We chose a `multi_match` query of the type `best_fields` as our baseline. We sea } ``` -To arrive at a query set we went with two random samples: a small one containing 250 queries, and a large one containing 5,000 queries. Unfortunately, the ESCI dataset does not have any information about the frequency of queries, which excludes frequency weighted approaches like the above mentioned PPTSS. +To arrive at a query set, we used two random samples: a small one containing 250 queries and a large one containing 5,000 queries. Unfortunately, the ESCI dataset does not contain any information about the frequency of queries, which excludes frequency-weighted approaches like the above-mentioned PPTSS. -These are the results running the test set of both query sets independently: +The following are the results of running the test set of both query sets independently. -| Metric | Baseline BM25 - Small | Baseline BM25 - Large | +| Metric | Baseline BM25 – Small | Baseline BM25 – Large | | :---: | :---: | :---: | | DCG@10 | 9.65 | 8.82 | | NDCG@10 | 0.24 | 0.23 | | Precision@10 | 0.27 | 0.24 | -We applied an 80/20 split on the query sets to arrange for a training and test dataset. Every optimization step uses the queries of the training set whereas search metrics are calculated and compared for the test set. 
For the baseline, we calculated the metrics for the test set only since there is no actual training going on. +We applied an 80/20 split on the query sets to arrange for a training and test dataset. Every optimization step uses the queries of the training set, whereas search metrics are calculated and compared for the test set. For the baseline, we calculated the metrics for only the test set because there is no actual training occurring. These numbers are now the starting point for our optimization journey. We want to maximize these metrics and see how far we get when looking for the best global hybrid search configuration in the next step. ## Identifying the best hybrid search configuration -With that starting point, we can explore the parameter space that hybrid search offers us. Our global hybrid search optimization notebook tries out 66 parameter combinations for hybrid search with the following set: +With this starting point, we can explore the parameter space that hybrid search offers. Our global hybrid search optimization notebook tries out 66 parameter combinations for hybrid search with the following set: * Normalization technique: [`l2`, `min_max`] * Combination technique: [`arithmetic_mean`, `harmonic_mean`, `geometric_mean`] * Lexical search weight: [`0.0`, `0.1`, `0.2`, `0.3`, `0.4`, `0.5`, `0.6`, `0.7`, `0.8`, `0.9`, `1.0`] * Neural search weight: [`1.0`, `0.9`, `0.8`, `0.7`, `0.6`, `0.5`, `0.4`, `0.3`, `0.2`, `0.1`, `0.0`] -Neural and lexical search weights always add up to 1.0, so of course we don’t need to choose them independently. +Neural and lexical search weights always add up to 1.0, so we don't need to choose them independently. This leaves us with 66 combinations to test: 2 normalization techniques * 3 combination techniques * 11 lexical/neural search weight combinations. -For each of these combinations we run the queries of the training set. To do so we use OpenSearch’s [temporary search pipeline capability](https://opensearch.org/docs/latest/search-plugins/search-pipelines/using-search-pipeline/#using-a-temporary-search-pipeline-for-a-request) that saves us from pre-creating all pipelines for the 66 parameter combinations. +For each of these combinations, we run the queries of the training set. To do so we use OpenSearch's [temporary search pipeline capability](https://opensearch.org/docs/latest/search-plugins/search-pipelines/using-search-pipeline/#using-a-temporary-search-pipeline-for-a-request), making it unnecessary to pre-create all pipelines for the 66 parameter combinations. Here is a template of the temporary search pipelines we use for our hybrid search queries: @@ -171,9 +171,9 @@ Here is a template of the temporary search pipelines we use for our hybrid searc } ``` -`norm` is the variable for the normalization technique, `combi` the variable for the combination technique, `lexicalness` is the lexical search weight and `neuralness` is the neural search weight. +`norm` is the variable for the normalization technique, `combi` is the variable for the combination technique, `lexicalness` is the lexical search weight, and `neuralness` is the neural search weight. 
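To illustrate how the 66 combinations can be enumerated and turned into request bodies, the following sketch fills the variables of a temporary pipeline definition similar to the template above. The exact processor layout should be taken from the OpenSearch documentation, and the code that would execute the queries is omitted.

```
from itertools import product

normalization_techniques = ["l2", "min_max"]
combination_techniques = ["arithmetic_mean", "harmonic_mean", "geometric_mean"]
neural_weights = [round(i / 10, 1) for i in range(11)]  # 0.0, 0.1, ..., 1.0

def temporary_pipeline(norm, combi, neuralness):
    lexicalness = round(1.0 - neuralness, 1)  # the weights always add up to 1.0
    return {
        "description": "temporary post processor for hybrid search",
        "phase_results_processors": [
            {
                "normalization-processor": {
                    "normalization": {"technique": norm},
                    "combination": {
                        "technique": combi,
                        "parameters": {"weights": [lexicalness, neuralness]},
                    },
                }
            }
        ],
    }

combinations = list(product(normalization_techniques, combination_techniques, neural_weights))
print(len(combinations))  # 2 * 3 * 11 = 66
```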
-The neural part of the hybrid query is searching in a field with embeddings that were created based on the title of a product with the model `all-MiniLM-L6-v2`: +The neural part of the hybrid query searches in a field with embeddings that were created based on the title of a product with the model `all-MiniLM-L6-v2`: ``` { @@ -186,7 +186,7 @@ The neural part of the hybrid query is searching in a field with embeddings that } ``` -Using the queries of the training dataset and retrieving the results, we calculate the three search metrics DCG@10, NDCG@10 and Precision@10. For the small dataset, there is one pipeline configuration that scores best for all three metrics. The pipeline uses the l2 norm, arithmetic mean, a lexical search weight of 0.4 and a neural search weight of 0.6. +Using the queries of the training dataset and retrieving the results, we calculate the three search metrics DCG@10, NDCG@10, and Precision@10. For the small dataset, there is one pipeline configuration that scores best for all three metrics. The pipeline uses the l2 norm, arithmetic mean, a lexical search weight of 0.4, and a neural search weight of 0.6. The following metrics are calculated: @@ -194,28 +194,28 @@ The following metrics are calculated: * NDCG: 0.26 * Precision: 0.29 -Applying the potentially best hybrid search parameter combination to the test set and calculating the metrics for these queries results in the following numbers: +Applying the potentially best hybrid search parameter combination to the test set and calculating the metrics for these queries results in the following numbers. -| Metric | Baseline BM25 - Small | Global Hybrid Search Optimizer - Small | Baseline BM25 - Large | Global Hybrid Search Optimizer - Large | +| Metric | Baseline BM25 – Small | Global Hybrid Search Optimizer – Small | Baseline BM25 – Large | Global Hybrid Search Optimizer – Large | | :---: | :---: | :---: | :---: | :---: | | DCG@10 | 9.65 | 9.99 | 8.82 | 9.30 | | NDCG@10 | 0.24 | 0.26 | 0.23 | 0.25 | | Precision@10 | 0.27 | 0.29 | 0.24 | 0.27 | -We can see improvements across all metrics for both datasets. To recap, up to here, we did the following: +Improvements are seen across all metrics for both datasets. To recap, up to this point, we performed the following steps: -* Create a query set by randomly sampling -* Generate judgments (to be precise, we only used the existing judgments of the ESCI dataset) -* Calculate search metrics for a baseline -* Try out several hybrid search combinations -* Compare search metrics +* Create a query set by randomly sampling. +* Generate judgments (to be precise, we only used the existing judgments of the ESCI dataset). +* Calculate search metrics for a baseline. +* Try out several hybrid search combinations. +* Compare search metrics. Two things are important to note: -* While the systematic approach can be transferred to other applications, the experiment results cannot\! It is necessary to always evaluate and experiment with your own data. -* The ESCI dataset does not have 100% coverage of the judgments. On average we saw roughly 35% judgment coverage among the top 10 retrieved results per query. This leaves us with some uncertainty. +* While the systematic approach can be transferred to other applications, the experiment results cannot. It is necessary to always evaluate and experiment with your own data. +* The ESCI dataset does not provide 100% judgment coverage. On average we saw roughly 35% judgment coverage among the top 10 retrieved results per query. 
This leaves us with some uncertainty. -The improvements tell us that we optimize our metrics on average when switching to hybrid search with the above parameter values. But of course there are queries that are winners and queries that are losers when doing this switch. This is something we can virtually always observe when comparing two search configurations with each other. While one configuration outperforms the other on average, not every query will profit from the configuration. +The improvements tell us that we optimize our metrics on average when switching to hybrid search with the above parameter values. But of course there are queries that are winners and queries that are losers when conducting this switch. This is something we can virtually always observe when comparing two search configurations with each other. While one configuration outperforms the other on average, not every query will profit from the configuration. The following chart shows the DCG@10 values of the training queries of the small query set. The x-axis represents the search pipeline with l2 norm, arithmetic mean, 0.1 lexical search weight and 0.9 neural search weight (configuration A). The y-axis represents the search pipeline with identical normalization and combination technique but switched weights: 0.9 lexical search weight, 0.1 neural search weight (configuration B). @@ -229,114 +229,114 @@ As we strive for having winners only this now leads us to the question: improvem We call identifying a suitable configuration individually per hybrid search query *dynamic hybrid search optimization*. To move in that direction we treat hybrid search as a query understanding challenge: by understanding certain features of the query we develop an approach to predict the “neuralness” of a query. “Neuralness” is used as the term describing the neural search weight for the hybrid search queries. -You may ask: why predict only the “neuralness” and none of the other parameter values? The results of the global hybrid search optimizer (large query set) showed us that the majority of search configurations share two parameter values: the l2 normalization technique and the arithmetic mean as the combination technique. +You may ask: Why predict only the "neuralness" and none of the other parameter values? The results of the global hybrid search optimizer (large query set) showed us that the majority of search configurations share two parameter values: the l2 normalization technique and the arithmetic mean as the combination technique. -Looking at the top 5 configurations per search metric (DCG@10, NDCG@10 and Precision@10) only five out of the 15 pipelines have `min_max` as an alternative normalization technique and none of these configurations has another combination technique. +Looking at the top 5 configurations per search metric (DCG@10, NDCG@10, and Precision@10), only 5 out of the 15 pipelines have `min_max` as an alternative normalization technique, and none of these configurations has another combination technique. -With that knowledge we assume the l2 normalization and the arithmetic mean combination technique to be best suited throughout the whole dataset. +With this knowledge we assume the l2 normalization and the arithmetic mean combination technique to be best suited throughout the whole dataset. -That leaves us with the parameter values for the neural search weight and the lexical search weight. 
By predicting one we can calculate the other by subtracting the prediction from 1: by predicting the “neuralness” we can calculate the “lexicalness” by 1 - “neuralness”. +That leaves us with the parameter values for the neural search weight and the lexical search weight. By predicting one we can calculate the other by subtracting the prediction from 1: by predicting the "neuralness" we can calculate the "lexicalness" by 1 - "neuralness". -To validate our hypothesis that we came up with a couple of feature groups and features within these groups. Afterwards we trained machine learning models to predict an expected NDCG value for a given “neuralness” of a query. +To validate our hypothesis, we created a couple of feature groups and features within these groups. Afterwards we trained machine learning models to predict an expected NDCG value for the given "neuralness" of a query. ## Feature groups and features -We divide the features into three groups: query features, lexical search result features and neural search result features: +We divide the features into three groups: query features, lexical search result features, and neural search result features: -* Query features: these features describe the user query string. -* Lexical search result features: these features describe the results that the user query retrieves when executed as a lexical search. -* Neural search result features: these features describe the results that the user query retrieves as a neural search. +* Query features: These features describe the user query string. +* Lexical search result features: These features describe the results that the user query retrieves when executed as a lexical search. +* Neural search result features: These features describe the results that the user query retrieves as a neural search. ### Query features -* Number of terms: how many terms does the user query have? -* Query length: how long is the user query (measured in characters)? -* Contains number: does the query contain one or more numbers? -* Contains special character: does the query contain one or more special characters (non-alphanumeric characters)? +* Number of terms: How many terms does the user query have? +* Query length: How long is the user query (measured in characters)? +* Contains number: Does the query contain one or more numbers? +* Contains special character: Does the query contain one or more special characters (non-alphanumeric characters)? ### Lexical search result features -* Number of results: the number of results for the lexical query. -* Maximum title score: the maximum score of the titles of the retrieved top 10 documents. The scores are BM25 scores calculated individually per result set. That means that the BM25 score is not calculated on the whole index but only on the retrieved subset for the query, making the scores more comparable to each other and less prone to outliers that could result from high IDF values for very rare query terms. -* Sum of title scores: the sum of the title scores of the top 10 documents, again calculated per-result set. We use the sum of the scores (and no average value) as an aggregate to have a measure of how relevant all retrieved top 10 titles are. BM25 scores are not normalized so using the sum instead of the average seemed reasonable. +* Number of results: The number of results for the lexical query. +* Maximum title score: The maximum score of the titles of the retrieved top 10 documents. The scores are BM25 scores calculated individually per result set. 
That means that the BM25 score is not calculated on the whole index but only on the retrieved subset for the query, making the scores more comparable to each other and less prone to outliers that could result from high IDF values for very rare query terms. +* Sum of title scores: The sum of the title scores of the top 10 documents, again calculated per result set. We use the sum of the scores (and no average value) as an aggregate to measure how relevant all retrieved top 10 titles are. BM25 scores are not normalized, so using the sum instead of the average seemed reasonable. ### Neural search result features -* Maximum semantic score: the maximum semantic score of the retrieved top 10 documents. This is the score we receive for a neural query based on the query’s similarity to the title. -* Average semantic score: By contrast to BM 25 scores the semantic scores are normalized and in the range of 0 to 1\. Using the average score seems more reasonable than going for the sum here. +* Maximum semantic score: The maximum semantic score of the retrieved top 10 documents. This is the score we receive for a neural query based on the query's similarity to the title. +* Average semantic score: In contrast to BM25 scores, the semantic scores are normalized and in the range of 0 to 1. Using the average score seems more reasonable than going for the sum. ## Feature engineering -As training data we used the output of the global hybrid search optimizer. As part of this process we ran every query 66 times: once per hybrid search configuration. For each query we calculated the search metrics and as a result we know per query which pipeline worked best, thus also which “neuralness” (neural search weight) worked best. We used the best NDCG@10 value per query as the metric deciding what the ideal “neuralness” was. +We used the output of the global hybrid search optimizer as training data. As part of this process, we ran every query 66 times: once per hybrid search configuration. For each query we calculated the search metrics, so we know which pipeline worked best per query and thus also which "neuralness" (neural search weight) worked best. We used the best NDCG@10 value per query as the metric to decide the ideal "neuralness." -That leaves us with 250 queries (small query set) or 5,000 queries (large query set) together with their “neuralness” values for which they achieved best NDCG@10 values. Next, we engineered the nine features for each query. This constitutes the training and test data. +That leaves us with 250 queries (small query set) or 5,000 queries (large query set) together with their "neuralness" values for which they achieved the best NDCG@10 values. Next, we engineered the nine features for each query. This constitutes the training and test data. ## Model training and evaluation -With the appropriate data at hand we explored different algorithms and experimented with different model fitting settings to identify patterns and evaluate if we’re on the right track with that approach. -We went for two relatively simple algorithms: linear regression and random forest regression. -We applied cross validation, regularization, and tried out all different feature combinations. This resulted in interesting findings that are summarized in the following section. +With the appropriate data at hand, we explored different algorithms and experimented with different model fitting settings to identify patterns and evaluate whether we're on the right track with that approach. 
+We used two relatively simple algorithms: linear regression and random forest regression. +We applied cross-validation, regularization, and tried out all different feature combinations. This resulted in interesting findings that are summarized in the following section. -**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (that is when comparing the RMSE scores within one cross validation run for one feature combination). +**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (that is, when comparing the RMSE scores within one cross-validation run for one feature combination). -**Model performance differs among the different algorithms**: the best RMSE score for the random forest regressor was 0.18 compared to 0.22 for the best linear regression model (large dataset) \- both with different feature combinations though. The more complex model (random forest) is the one that performs better. However, better performance comes with the trade-off of longer training times for this more complex model. +**Model performance differs among the different algorithms**: The best RMSE score for the random forest regressor was 0.18 compared to 0.22 for the best linear regression model (large dataset)---both with different feature combinations, though. The more complex model (random forest) performs better. However, better performance comes with the trade-off of longer training times for this more complex model. -**Feature combinations of all groups have the lowest RMSE**: the lowest error scores can be achieved when combining features from all three feature groups (query, lexical search result, neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with lexical search result feature combinations only serves as the best alternative. +**Feature combinations of all groups have the lowest RMSE**: The lowest error scores can be achieved when combining features from all three feature groups (query, lexical search result, and neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with lexical search result feature combinations only serves as the best alternative. -This is particularly interesting when thinking about productionizing this: putting an approach like this in production means that features need to be calculated per query during query time. Getting lexical search result features and neural search result features requires running these queries which would add significant latency to the overall query even prior to inference time. +This is particularly interesting when thinking about productionizing this: putting an approach like this in production means that features need to be calculated per query during query time. Getting lexical search result features and neural search result features requires running these queries, which would add significant latency to the overall query even prior to inference time. 
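The model fitting described above could be sketched roughly as follows, with `Ridge` standing in for a regularized linear regression. The column names, the target column, and the DataFrame `training_data` are hypothetical stand-ins for the engineered data, and the notebooks in the repository may differ in detail.

```
from itertools import combinations

from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# hypothetical column names for the nine engineered features
FEATURES = [
    "num_terms", "query_length", "contains_number", "contains_special_char",
    "lexical_num_results", "lexical_max_title_score", "lexical_sum_title_scores",
    "neural_max_score", "neural_avg_score",
]

def rmse_for(model, training_data, feature_subset):
    # "neuralness" is always passed as an additional input feature;
    # the observed NDCG@10 is the regression target
    X = training_data[list(feature_subset) + ["neuralness"]]
    y = training_data["ndcg_at_10"]
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    return -scores.mean()

def best_feature_combination(model, training_data):
    results = []
    for size in range(1, len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            results.append((rmse_for(model, training_data, subset), subset))
    return min(results)  # lowest cross-validated RMSE wins

# Example usage with a pandas DataFrame `training_data`:
# print(best_feature_combination(RandomForestRegressor(), training_data))
# print(best_feature_combination(Ridge(alpha=1.0), training_data))
```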
-The following picture shows the distribution of RMSE scores within one cross validation run when fitting random forest regression models with feature combinations within one group (blue: neural search features, red: lexical result features, green: query features) and across the groups (purple: features from all groups). The feature mix (purple) scores lowest (best), followed by training on lexical search result features only (red). +The following image shows the distribution of RMSE scores within one cross-validation run when fitting random forest regression models with feature combinations within one group (blue: neural search features, red: lexical result features, green: query features) and across the groups (purple: features from all groups). The feature mix (purple) scores lowest (best), followed by training on lexical search result features only (red). Box plot showing the distribution of RMSE scores within one cross validation run when fitting the random forest regression model{:style="width: 100%; max-width: 800px; height: auto; text-align: center"} -The overall picture does not change when looking at the numbers for the linear model: +The overall picture does not change when looking at the numbers for the linear model. Box plot showing the distribution of RMSE scores within one cross validation run when fitting the linear regression model ## Model testing -Let’s look how the trained models perform when applying them dynamically on our test set. -For each query of the test set we engineer the features and let the model make the inference for the “neuralness” values between 0.0 and 1.0, since “neuralness” also is a feature that we pass into the model. We then take the neuralness value that resulted in the highest prediction which is the best NDCG value. By knowing the “neuralness” we can calculate the “lexicalness” by subtracting the “neuralness” from 1. +Let's look at how the trained models perform when applying them dynamically on our test set. +For each query of the test set we engineer the features and let the model make the inference for the "neuralness" values between 0.0 and 1.0 because "neuralness" is also a feature that we pass into the model. We then take the "neuralness" value that resulted in the highest prediction, which is the best NDCG value. By knowing the "neuralness" we can calculate the "lexicalness" by subtracting the "neuralness" from 1. -We again use the l2 norm and arithmetic mean as our hybrid search normalization and combination parameter values as they scored best in the global hybrid search optimizer experiment. With that we build the hybrid query, execute it, retrieve the results and calculate the search metrics like in the baseline and global hybrid search optimizer. +We again use the l2 norm and arithmetic mean as our hybrid search normalization and combination parameter values because they scored best in the global hybrid search optimizer experiment. With that, we build the hybrid query, execute it, retrieve the results, and calculate the search metrics like with the baseline and global hybrid search optimizer. -Metrics for the small dataset: +The following are the metrics for the small dataset. 
| Metric | Baseline BM25 | Global Hybrid Search Optimizer | Dynamic Hybrid Search Optimizer – Linear Model | Dynamic Hybrid Search Optimizer – Random Forest Model | | :---: | :---: | :---: | :---: | :---: | | DCG@10 | 9.65 | 9.99 | 10.92 | 10.92 | | NDCG@10 | 0.24 | 0.26 | 0.28 | 0.28 | | Precision@10 | 0.27 | 0.29 | 0.32 | 0.32 | -Metrics for the large dataset: +The following are the metrics for the large dataset. | Metric | Baseline BM25 | Global Hybrid Search Optimizer | Dynamic Hybrid Search Optimizer – Linear Model | Dynamic Hybrid Search Optimizer – Random Forest Model | | :---: | :---: | :---: | :---: | :---: | | DCG@10 | 8.82 | 9.30 | 10.13 | 10.13 | | NDCG@10 | 0.23 | 0.25 | 0.27 | 0.27 | | Precision@10 | 0.24 | 0.27 | 0.29 | 0.29 | -Looking at these numbers shows us a steady positive trend starting from the baseline going all the way to the dynamic predictions of lexicalness and neuralness per query. The large dataset shows a DCG increase of 8.9% rising from 9.3 to 10.13, the small dataset shows an increase of 9.3%. The other metrics increase as well: NDCG shows an improvement of 7.4%for the large dataset, 10.3% for the small dataset, Precision shows an improvement of 8% for the large dataset and 7.7% for the small dataset. +Looking at these numbers shows us a steady positive trend starting from the baseline and going all the way to the dynamic predictions of "lexicalness" and "neuralness" per query. The large dataset shows a DCG increase of 8.9%, rising from 9.3 to 10.13, and the small dataset shows an increase of 9.3%. The other metrics increase as well: NDCG shows an improvement of 7.4% for the large dataset and 10.3% for the small dataset, and Precision shows an improvement of 8% for the large dataset and 7.7% for the small dataset. -Interestingly, both models score exactly equally. The reason for this is that while they both predict different NDCG values, they predict the best ones with the same "neuralness" as an input feature. So while the models may differ in RMSE scores during the evaluation phase, they provide equal results when applied to the test set. +Interestingly, both models score exactly equally. The reason for this is that while they both predict different NDCG values, they predict the best ones with the same "neuralness" as an input feature. So while the models may differ in RMSE scores during the evaluation phase, they provide equal results when applied to the test set. -Despite the low judgement coverage we see improvements for all metrics. This gives us confidence that this approach can provide value for search systems not only switching from lexical to hybrid search but also those who already are in production but have never used any systematic process to evaluate and identify the best settings. +Despite the low judgment coverage, we see improvements for all metrics. This gives us confidence that this approach can provide value not only for search systems switching from lexical to hybrid search but also for those that are already in production but have never used any systematic process to evaluate and identify the best settings.
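To sketch the inference step just described: for each query we score every candidate "neuralness" value with the trained model and keep the one with the highest predicted NDCG@10. `model` and `extract_features` are assumed to exist from the previous steps and are named here only for illustration.

```
def predict_weights(model, extract_features, query):
    # extract_features returns the nine engineered features for the query
    base_features = extract_features(query)
    best_neuralness, best_prediction = 0.0, float("-inf")
    for i in range(11):
        neuralness = round(i / 10, 1)
        predicted_ndcg = model.predict([base_features + [neuralness]])[0]
        if predicted_ndcg > best_prediction:
            best_neuralness, best_prediction = neuralness, predicted_ndcg
    lexicalness = round(1.0 - best_neuralness, 1)
    return best_neuralness, lexicalness

# Example: neuralness, lexicalness = predict_weights(model, extract_features, "adidas shoes")
# The returned weights are then plugged into the l2 / arithmetic_mean search pipeline.
```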
# Conclusion -We provide a systematic approach to optimizing hybrid search in OpenSearch based on its current state and capabilities (normalization & combination techniques). The results look promising especially given the low judgment coverage that the ESCI dataset has. +We provide a systematic approach to optimizing hybrid search in OpenSearch based on its current state and capabilities (normalization and combination techniques). The results look promising, especially given the low judgment coverage provided by the ESCI dataset. -We encourage everyone to adopt the approach and explore its usefulness in their domain with their dataset. We are looking forward to hearing about the experimentation results the community has with the provided approach. +We encourage everyone to adopt the approach and explore its usefulness with their dataset. We look forward to hearing the community's feedback on the provided approach. # Future work The currently planned next steps include replicating the approach with a dataset that has higher judgment coverage and covers a different domain in order to determine its generalizability. -Optimizing hybrid search typically is not the first step in search result quality optimization. Optimizing lexical search results first is especially important as the lexical search query is part of the hybrid search query. Bayesian optimization is an efficient technique to efficiently identify the best set of fields and field weights, sometimes also referred to as learning to boost. +Optimizing hybrid search is not typically the first step in search result quality optimization. Optimizing lexical search results first is especially important because the lexical search query is part of the hybrid search query. Bayesian optimization is an efficient technique for identifying the best set of fields and field weights, sometimes also referred to as "learning to boost." -The straightforward approach of trying out 66 different combinations can be created more elegantly by applying a technique like Bayesian optimization as well. In particular for large search indexes and a large amount of queries we expect this to result in a performance improvement. +The straightforward approach of trying out 66 different combinations can be performed more elegantly by applying a technique like Bayesian optimization as well. In particular, we expect this to result in a performance improvement for large search indexes and large numbers of queries. -Reciprocal rank fusion is another way of combining lexical search and neural search, currently under active development: +Reciprocal rank fusion, currently under active development, is another way of combining lexical search and neural search: * [https://github.com/opensearch-project/neural-search/issues/865](https://github.com/opensearch-project/neural-search/issues/865) * [https://github.com/opensearch-project/neural-search/issues/659](https://github.com/opensearch-project/neural-search/issues/659) -We also plan to include this technique, as well to identify the best way of running hybrid search dynamically per query. +We also plan to include this technique and to identify the best way of running hybrid search dynamically per query.
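For context, reciprocal rank fusion combines result lists based on ranks instead of normalized scores. The following is a minimal sketch of the commonly used formulation, with the customary rank constant k = 60; it shows the generic formula, not the OpenSearch implementation tracked in the issues above.

```
def reciprocal_rank_fusion(result_lists, k=60):
    # result_lists: one list of document IDs per sub-query, in ranked order
    fused = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

lexical_results = ["doc_a", "doc_b", "doc_c"]
neural_results = ["doc_b", "doc_d", "doc_a"]
print(reciprocal_rank_fusion([lexical_results, neural_results]))
```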
From 7ab18c89744b1a24f742bd3b7dc1c4b23c3e8a72 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:48:34 +0100 Subject: [PATCH 14/18] Update 2024-12-xx-hybrid-search-optimization.md Updates as per the latest review Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- .../2024-12-xx-hybrid-search-optimization.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index 920c3d56c..a0c8eed20 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -33,14 +33,14 @@ We treat hybrid search configuration as a parameter optimization problem. The pa * Two [normalization techniques: `l2` and `min_max`](https://opensearch.org/blog/How-does-the-rank-normalization-work-in-hybrid-search/). * Three combination techniques: arithmetic mean, harmonic mean, geometric mean. -* The lexical and neural search weights, values in the range from 0 to 1. +* The lexical and neural search weights, which are values ranging from 0 to 1. With this knowledge we can define a collection of parameter combinations to try out and compare. To follow this path we need three things: 1. Query set: A collection of queries. 2. Judgments: A collection of ratings that indicate the relevance of a result for a given query. -3. Search quality metrics: A numeric expression of how well the search system performs in returning relevant documents for queries. +3. Search quality metrics: A numeric expression indicating how well the search system performs in returning relevant documents for queries. ## Query set @@ -215,15 +215,15 @@ Two things are important to note: * While the systematic approach can be transferred to other applications, the experiment results cannot. It is necessary to always evaluate and experiment with your own data. * The ESCI dataset does not provide 100% judgment coverage. On average we saw roughly 35% judgment coverage among the top 10 retrieved results per query. This leaves us with some uncertainty. -The improvements tell us that we optimize our metrics on average when switching to hybrid search with the above parameter values. But of course there are queries that are winners and queries that are losers when conducting this switch. This is something we can virtually always observe when comparing two search configurations with each other. While one configuration outperforms the other on average, not every query will profit from the configuration. +The improvements tell us that we optimize our metrics on average when switching to hybrid search with the above parameter values. But of course there are queries that benefit (as in their search quality metrics improve) and queries that do not benefit (as in their search quality metrics decrease) when conducting this switch. This is something we can virtually always observe when comparing two search configurations with each other. While one configuration outperforms the other on average, not every query will profit from the configuration. -The following chart shows the DCG@10 values of the training queries of the small query set. The x-axis represents the search pipeline with l2 norm, arithmetic mean, 0.1 lexical search weight and 0.9 neural search weight (configuration A). 
The y-axis represents the search pipeline with identical normalization and combination technique but switched weights: 0.9 lexical search weight, 0.1 neural search weight (configuration B). +The following chart shows the DCG@10 values of the training queries of the small query set. The x-axis represents the search pipeline with l2 norm, arithmetic mean, 0.1 lexical search weight, and 0.9 neural search weight (configuration A). The y-axis represents the search pipeline with an identical normalization and combination technique but switched weights: 0.9 lexical search weight and 0.1 neural search weight (configuration B). Scatter Plot of DCG values for lexical-heavy search configuration and Neural-heavy search configuration{:style="width: 100%; max-width: 800px; height: auto; text-align: center"} -The clearest winners of configuration B are those that are located on the y-axis: they have a DCG score of 0 for this configuration. And for configuration A some even score above 15. +The queries with the highest search quality metrics improvements of configuration B are those that are located on the y-axis: they have a DCG score of 0 for this configuration. And for configuration A some even score above 15. -As we strive for having winners only this now leads us to the question: improvements on average are fine but how can we tackle this in a more targeted way to come up with an approach that provides us the best configuration per-query instead of one good configuration for all queries? +As we strive for improving the search quality metrics for all queries this now leads us to the question: improvements on average are fine but how can we tackle this in a more targeted way to come up with an approach that provides us the best configuration per-query instead of one good configuration for all queries? # Dynamic hybrid search optimizer @@ -245,7 +245,7 @@ We divide the features into three groups: query features, lexical search result * Query features: These features describe the user query string. * Lexical search result features: These features describe the results that the user query retrieves when executed as a lexical search. -* Neural search result features: These features describe the results that the user query retrieves as a neural search. +* Neural search result features: These features describe the results that the user query retrieves when executed as a neural search. ### Query features @@ -263,7 +263,7 @@ We divide the features into three groups: query features, lexical search result ### Neural search result features * Maximum semantic score: The maximum semantic score of the retrieved top 10 documents. This is the score we receive for a neural query based on the query's similarity to the title. -* Average semantic score: In contrast to BM25 scores, the semantic scores are normalized and in the range of 0 to 1. Using the average score seems more reasonable than going for the sum. +* Average semantic score: In contrast to BM25 scores, the semantic scores are normalized and in the range of 0 to 1. Using the average score seems more reasonable than attempting to calculate the sum. ## Feature engineering @@ -271,17 +271,17 @@ We used the output of the global hybrid search optimizer as training data. As pa That leaves us with 250 queries (small query set) or 5,000 queries (large query set) together with their "neuralness" values for which they achieved the best NDCG@10 values. Next, we engineered the nine features for each query. This constitutes the training and test data. 
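As a rough sketch of what this could look like in practice, the following example assembles a small feature matrix and compares two simple regressors of the kind discussed in the next section using cross-validated RMSE. The feature set shown is a simplified stand-in for the nine features described above, scikit-learn is an assumed implementation choice, and the synthetic `training_rows` merely stand in for the per-query output of the global hybrid search optimizer.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def engineer_features(query_text, lexical_scores, neural_scores):
    """Illustrative (not the full) feature set: one query feature, two lexical
    result features, and two neural result features for the top 10 results."""
    return [
        len(query_text.split()),                                    # query feature: term count
        max(lexical_scores, default=0.0),                           # lexical: max BM25 score
        float(np.sum(lexical_scores)) if lexical_scores else 0.0,   # lexical: sum of BM25 scores
        max(neural_scores, default=0.0),                            # neural: max semantic score
        float(np.mean(neural_scores)) if neural_scores else 0.0,    # neural: average semantic score
    ]

# Synthetic stand-in for the optimizer output: per query and "neuralness" value,
# the observed NDCG@10 (the real rows come from the global optimizer runs).
rng = np.random.default_rng(42)
training_rows = [
    {
        "query": f"placeholder query {i}",
        "lexical_scores": list(rng.uniform(0, 20, 10)),
        "neural_scores": list(rng.uniform(0, 1, 10)),
        "neuralness": round(float(rng.uniform(0, 1)), 1),
        "ndcg_at_10": float(rng.uniform(0, 1)),
    }
    for i in range(100)
]

X = np.array([
    engineer_features(r["query"], r["lexical_scores"], r["neural_scores"]) + [r["neuralness"]]
    for r in training_rows
])
y = np.array([r["ndcg_at_10"] for r in training_rows])

for model in (LinearRegression(), RandomForestRegressor(n_estimators=100, random_state=42)):
    rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{type(model).__name__}: mean RMSE = {rmse.mean():.3f}")
```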
-## Model training and evaluation +## Model training and evaluation findings -With the appropriate data at hand, we explored different algorithms and experimented with different model fitting settings to identify patterns and evaluate whether we're on the right track with that approach. +With the appropriate data at hand, we explored different algorithms and experimented with different model fitting settings to identify patterns and evaluate whether our approach was suitable. We used two relatively simple algorithms: linear regression and random forest regression. We applied cross-validation, regularization, and tried out all different feature combinations. This resulted in interesting findings that are summarized in the following section. -**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. It also results in less variation of the RMSE scores within the cross-validation runs (that is, when comparing the RMSE scores within one cross-validation run for one feature combination). +**Dataset size matters**: Working with the differently sized datasets revealed that the amount of data matters when training and evaluating the models. The larger dataset reported a smaller Root Mean Squared Error compared to the smaller dataset. The larger dataset also showed less variation of the RMSE scores within the cross-validation runs (that is, when comparing the RMSE scores within one cross-validation run for one feature combination). **Model performance differs among the different algorithms**: The best RMSE score for the random forest regressor was 0.18 compared to 0.22 for the best linear regression model (large dataset)---both with different feature combinations, though. The more complex model (random forest) performs better. However, better performance comes with the trade-off of longer training times for this more complex model. -**Feature combinations of all groups have the lowest RMSE**: The lowest error scores can be achieved when combining features from all three feature groups (query, lexical search result, and neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with lexical search result feature combinations only serves as the best alternative. +**Feature combinations of all groups have the lowest RMSE**: The lowest error scores can be achieved when combining features from all three feature groups (query, lexical search result, and neural search result). Looking at RMSE scores for feature combinations within the feature groups shows that working with lexical search result feature combinations serves as the best alternative. This is particularly interesting when thinking about productionizing this: putting an approach like this in production means that features need to be calculated per query during query time. Getting lexical search result features and neural search result features requires running these queries, which would add significant latency to the overall query even prior to inference time. @@ -293,7 +293,7 @@ The overall picture does not change when looking at the numbers for the linear m ## Model testing -Let's look at how the trained models perform when applying them dynamically on our test set. +Let's look at how the trained models perform when applying them dynamically to our test set. 
For each query of the test set we engineer the features and let the model make the inference for the "neuralness" values between 0.0 and 1.0 because "neuralness" is also a feature that we pass into the model. We then take the "neuralness" value that resulted in the highest prediction, which is the best NDCG value. By knowing the "neuralness" we can calculate the "lexicalness" by subtracting the "neuralness" from 1. We again use the l2 norm and arithmetic mean as our hybrid search normalization and combination parameter values because they scored best in the global hybrid search optimizer experiment. With that, we build the hybrid query, execute it, retrieve the results, and calculate the search metrics like with the baseline and global hybrid search optimizer. From 7e0d427956d9c96dba804563d91c52fbadc184b0 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:52:35 +0100 Subject: [PATCH 15/18] Update 2024-12-xx-hybrid-search-optimization.md change date, last change from editorial review Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index a0c8eed20..c29448a0d 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -3,7 +3,7 @@ layout: post title: "Optimizing hybrid search in OpenSearch" authors: - dwrigley -date: 2024-12-xx +date: 2024-12-30 categories: - technical-posts - community @@ -227,7 +227,7 @@ As we strive for improving the search quality metrics for all queries this now l # Dynamic hybrid search optimizer -We call identifying a suitable configuration individually per hybrid search query *dynamic hybrid search optimization*. To move in that direction we treat hybrid search as a query understanding challenge: by understanding certain features of the query we develop an approach to predict the “neuralness” of a query. “Neuralness” is used as the term describing the neural search weight for the hybrid search queries. +We call identifying a suitable configuration individually per hybrid search query *dynamic hybrid search optimization*. To move in that direction we treat hybrid search as a query understanding challenge: by understanding certain features of the query, we develop an approach to predict the "neuralness" of a query. "Neuralness" is used to describe the neural search weight for the hybrid search queries. You may ask: Why predict only the "neuralness" and none of the other parameter values? The results of the global hybrid search optimizer (large query set) showed us that the majority of search configurations share two parameter values: the l2 normalization technique and the arithmetic mean as the combination technique. 
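A sketch of this per-query prediction and query construction is shown below: for each query we score the candidate "neuralness" values with the trained regressor, keep the best one, and build a hybrid query whose pipeline fixes the l2 normalization and arithmetic mean combination, as described above. The 0.1 step size for the candidate values, the field names (`title`, `title_embedding`), and the model ID are illustrative assumptions, not the exact setup used in the experiments.

```python
import numpy as np

CANDIDATE_NEURALNESS = [round(x, 1) for x in np.arange(0.0, 1.01, 0.1)]  # assumed 0.1 step

def best_neuralness(model, base_features):
    """Predict NDCG@10 for each candidate "neuralness" and keep the best one.

    `base_features` are the engineered query features (everything except "neuralness")."""
    best_value, best_prediction = 0.0, float("-inf")
    for neuralness in CANDIDATE_NEURALNESS:
        features = np.array(base_features + [neuralness]).reshape(1, -1)
        predicted_ndcg = float(model.predict(features)[0])
        if predicted_ndcg > best_prediction:
            best_value, best_prediction = neuralness, predicted_ndcg
    return best_value

def hybrid_search_request(query_text, neuralness):
    """Build the per-query search pipeline and hybrid query body.

    Field names ("title", "title_embedding") and the model ID are placeholders."""
    lexicalness = round(1.0 - neuralness, 1)
    pipeline = {  # l2 normalization + arithmetic mean, weighted per query
        "phase_results_processors": [{
            "normalization-processor": {
                "normalization": {"technique": "l2"},
                "combination": {
                    "technique": "arithmetic_mean",
                    "parameters": {"weights": [lexicalness, neuralness]},
                },
            }
        }]
    }
    body = {
        "size": 10,
        "query": {
            "hybrid": {
                "queries": [
                    {"match": {"title": {"query": query_text}}},  # lexical part
                    {"neural": {"title_embedding": {              # neural part
                        "query_text": query_text,
                        "model_id": "<your-model-id>",
                        "k": 100,
                    }}},
                ]
            }
        },
    }
    return pipeline, body

# Example usage (assuming `model` is the trained regressor and `features` the
# engineered features for the query):
# pipeline, body = hybrid_search_request("tv stand", best_neuralness(model, features))
```

The returned pipeline definition can be registered with `PUT /_search/pipeline/<name>` and referenced via the `search_pipeline` query parameter when executing the body against the index.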
From 42ac7bacbf19ee7555588abeee895e1a75817b44 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 17:54:09 +0100 Subject: [PATCH 16/18] Update 2024-12-xx-hybrid-search-optimization.md add feedback link to OpenSearch forum Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index c29448a0d..8301d9e9c 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -324,7 +324,7 @@ Despite the low judgement coverage, we see improvements for all metrics. This gi We provide a systematic approach to optimizing hybrid search in OpenSearch based on its current state and capabilities (normalization and combination techniques). The results look promising, especially given the low judgment coverage provided by the ESCI dataset. -We encourage everyone to adopt the approach and explore its usefulness with their dataset. We look forward to hearing the community's feedback on the provided approach. +We encourage everyone to adopt the approach and explore its usefulness with their dataset. We look forward to hearing the community's feedback on the provided approach on the [OpenSearch forum](https://forum.opensearch.org/). # Future work From 61abdbeab3508928b044b72df1cc686dc3a3e387 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 23 Dec 2024 18:16:58 +0100 Subject: [PATCH 17/18] Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- _posts/2024-12-xx-hybrid-search-optimization.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-xx-hybrid-search-optimization.md index 8301d9e9c..eb72ae07a 100644 --- a/_posts/2024-12-xx-hybrid-search-optimization.md +++ b/_posts/2024-12-xx-hybrid-search-optimization.md @@ -221,9 +221,9 @@ The following chart shows the DCG@10 values of the training queries of the small Scatter Plot of DCG values for lexical-heavy search configuration and Neural-heavy search configuration{:style="width: 100%; max-width: 800px; height: auto; text-align: center"} -The queries with the highest search quality metrics improvements of configuration B are those that are located on the y-axis: they have a DCG score of 0 for this configuration. And for configuration A some even score above 15. +The queries with the highest search quality metric improvements of configuration B are those that are located on the y-axis: they have a DCG score of 0 for this configuration. And for configuration A some even score above 15. -As we strive for improving the search quality metrics for all queries this now leads us to the question: improvements on average are fine but how can we tackle this in a more targeted way to come up with an approach that provides us the best configuration per-query instead of one good configuration for all queries? 
+Striving to improve the search quality metrics for all queries raises the following question: improvements on average are fine, but how can we tackle this in a more targeted way to come up with an approach that provides the best configuration per query instead of one good configuration for all queries? # Dynamic hybrid search optimizer From 5becade6828ab7f0813c1ba8c524fa4a50bb75a4 Mon Sep 17 00:00:00 2001 From: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> Date: Mon, 30 Dec 2024 18:18:32 +0100 Subject: [PATCH 18/18] Rename 2024-12-xx-hybrid-search-optimization.md to 2024-12-30-hybrid-search-optimization.md Signed-off-by: Daniel Wrigley <54574577+wrigleyDan@users.noreply.github.com> --- ...h-optimization.md => 2024-12-30-hybrid-search-optimization.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename _posts/{2024-12-xx-hybrid-search-optimization.md => 2024-12-30-hybrid-search-optimization.md} (100%) diff --git a/_posts/2024-12-xx-hybrid-search-optimization.md b/_posts/2024-12-30-hybrid-search-optimization.md similarity index 100% rename from _posts/2024-12-xx-hybrid-search-optimization.md rename to _posts/2024-12-30-hybrid-search-optimization.md