diff --git a/170_Relevance/70_Pluggable_similarities.asciidoc b/170_Relevance/70_Pluggable_similarities.asciidoc
index a8b7a2f32..08106c720 100644
--- a/170_Relevance/70_Pluggable_similarities.asciidoc
+++ b/170_Relevance/70_Pluggable_similarities.asciidoc
@@ -1,68 +1,67 @@
 [[pluggable-similarites]]
-=== Pluggable similarity algorithms
+=== Pluggable Similarity Algorithms

 Before we move on from relevance and scoring, we will finish this chapter
 with a more advanced subject: pluggable similarity algorithms.((("similarity algorithms", "pluggable")))((("relevance", "controlling", "using pluggable similarity algorithms")))
 While Elasticsearch uses the <> as its default similarity algorithm,
-it supports a number of other algorithms out of the box, which are listed
+it supports other algorithms out of the box, which are listed
 in the {ref}index-modules-similarity.html[Similarity Modules] documentation.

 [[bm25]]
 ==== Okapi BM25

-The most interesting competitor to TF/IDF and the Vector Space Model is called
+The most interesting competitor to TF/IDF and the vector space model is called
 http://en.wikipedia.org/wiki/Okapi_BM25[_Okapi BM25_], which is considered to
 be a _state-of-the-art_ ranking function.((("BM25")))((("Okapi BM25", see="BM25"))) BM25 originates from the
-http://en.wikipedia.org/wiki/Probabilistic_relevance_model[Probabilistic relevance model],
-rather than the Vector Space Model, yet((("probabalistic relevance model"))) the algorithm has a lot in common with
-Lucene's Practical Scoring Function.
+http://en.wikipedia.org/wiki/Probabilistic_relevance_model[probabilistic relevance model],
+rather than the vector space model, yet((("probabilistic relevance model"))) the algorithm has a lot in common with
+Lucene's practical scoring function.

-Both make use of term frequency, inverse document frequency, and field length
+Both use term frequency, inverse document frequency, and field-length
 normalization, but the definition of each of these factors is a little
 different. Rather than explaining the BM25 formula in detail, we will focus
 on the practical advantages that BM25 offers.

 [[bm25-saturation]]
-===== Term frequency saturation
+===== Term-frequency saturation

 Both TF/IDF and BM25 use <> to distinguish between common (low value) words
 and uncommon (high value) words.((("inverse document frequency", "use by TF/IDF and BM25"))) Both also
-recognise (see <>) that the more often a word appears in a document, the
+recognize (see <>) that the more often a word appears in a document, the
 more likely is it that the document is relevant for that word.

 However, common words occur commonly. ((("BM25", "term frequency saturation"))) The fact that a common word appears
 many times in one document is offset by the fact that the word appears many
-times in *all* documents.
+times in _all_ documents.

-However, TF/IDF was designed in an era where it was standard practice to
-remove the *most* common words (or _stopwords_, see <>) from the
+However, TF/IDF was designed in an era when it was standard practice to
+remove the _most_ common words (or _stopwords_, see <>) from the
 index altogether.((("stopwords", "removal from index"))) The algorithm didn't
 need to worry about an upper limit for term frequency because the most
 frequent terms had already been removed.
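+
+The difference becomes clear if you sketch the two term-frequency weights
+side by side. This is only a rough sketch: IDF and field-length normalization
+are left out, and `k1` is one of the BM25 tuning parameters discussed in
+<<bm25-tunability>>:
+
+....
+TF/IDF:  weight(tf) = sqrt(tf)                     (no upper limit)
+BM25:    weight(tf) = tf * (k1 + 1) / (tf + k1)    (never exceeds k1 + 1)
+....
+
+With the default `k1` of `1.2`, a term that appears 10 times in a document
+gets a BM25 weight of about 1.96, and a term that appears 1,000 times gets
+only about 2.2; the `sqrt(tf)` weight, by contrast, just keeps climbing.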

-In Elasticsearch, the `standard` analyzer -- the default for `string` fields
--- doesn't remove stopwords because, even though they are words of little
+In Elasticsearch, the `standard` analyzer--the default for `string` fields--doesn't remove stopwords because, even though they are words of little
 value, they do still have some value. The result is that, for very long
 documents, the sheer number of occurrences of words like `the` and `and`
 can artificially boost their weight.

 BM25, on the other hand, does have an upper limit. Terms that appear 5 to 10
 times in a document have a significantly larger impact on relevance than terms
-that appear just once or twice. However, terms that appear 20 times in a
+that appear just once or twice. However, as can be seen in <>, terms that appear 20 times in a
 document have almost the same impact as terms that appear a thousand times or
 more.

-This is known as _non-linear term frequency saturation_.
+This is known as _nonlinear term-frequency saturation_.

 [[img-bm25-saturation]]
 .Term frequency saturation for TF/IDF and BM25
 image::images/elas_1706.png[Term frequency saturation for TF/IDF and BM25]

 [[bm25-normalization]]
-===== Field length normalization
+===== Field-length normalization

-In <> we said that Lucene considers shorter fields to have
-more weight than longer fields -- the frequency of a term in a field is offset
-by the length of the field. However, the Practical Scoring Functions treats
+In <>, we said that Lucene considers shorter fields to have
+more weight than longer fields: the frequency of a term in a field is offset
+by the length of the field. However, the practical scoring function treats
 all fields in the same way. It will treat all `title` fields (because they
 are short) as more important than all `body` fields (because they are long).
@@ -71,9 +70,9 @@ it considers each field separately by taking the average length of the field
 into account. It can distinguish between a short `title` field and a `long`
 title field.

-CAUTION: In <> we said that the `title` field has a
-``natural'' boost over the `body` field because of its length. This natural
-boost disappears with BM25 as differences in field length only apply within a
+CAUTION: In <>, we said that the `title` field has a
+_natural_ boost over the `body` field because of its length. This natural
+boost disappears with BM25 as differences in field length apply only within a
 single field.
@@ -82,22 +81,19 @@ single field.
 [[bm25-tunability]]
 ===== Tuning BM25

 One of the nice features of BM25 is that, unlike TF/IDF, it has two parameters
 that allow it to be tuned:

-[horizontal]
-__`k1`__::
-
+`k1`::
     This parameter controls how quickly an increase in term frequency results
-    in term frequency saturation. The default value is `1.2`. Lower values
-    result in quicker saturation and higher values in slower saturation.
-
-__`b`__::
+    in term-frequency saturation. The default value is `1.2`. Lower values
+    result in quicker saturation, and higher values in slower saturation.
-    This parameter controls how much effect field length normalization should
+`b`::
+    This parameter controls how much effect field-length normalization should
     have. A value of `0.0` disables normalization completely, and a value of
     `1.0` normalizes fully. The default is `0.75`.

 The practicalities of tuning BM25 are another matter. The default values for
-__`k1`__ and __`b`__ should be suitable for most document collections, but the
+`k1` and `b` should be suitable for most document collections, but the
 optimal values really depend on the collection. Finding good values for your
 collection is a matter of adjusting, checking, and adjusting again.
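+
+If you want to experiment, both parameters can be set when you define a
+custom similarity in the index settings. The request below is only a sketch:
+the index name `my_index`, the similarity name `my_bm25`, the mapping type
+`doc`, and the `title` field are placeholders to adapt to your own data, but
+the `"type": "BM25"` similarity and its `k1` and `b` parameters are the two
+knobs described above:
+
+[source,js]
+--------------------------------------------------
+PUT /my_index
+{
+  "settings": {
+    "similarity": {
+      "my_bm25": {
+        "type": "BM25",
+        "k1":   1.2,
+        "b":    0.75
+      }
+    }
+  },
+  "mappings": {
+    "doc": {
+      "properties": {
+        "title": {
+          "type":       "string",
+          "similarity": "my_bm25"
+        }
+      }
+    }
+  }
+}
+--------------------------------------------------
+
+Each change to `k1` or `b`, followed by a rerun of your relevance tests, is
+one round of the adjust, check, and adjust again loop described above.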