Commit

Fixed build errors.
debadair committed Jan 7, 2016
1 parent baecf3d commit dbd159c
Showing 8 changed files with 12 additions and 11 deletions.
060_Distributed_Search/10_Fetch_phase.asciidoc (2 changes: 1 addition & 1 deletion)
@@ -58,6 +58,6 @@ after page until your servers crumble at the knees.
If you _do_ need to fetch large numbers of docs from your cluster, you can
do so efficiently by disabling sorting with the `scroll` query,
-which we discuss <<scan-scroll,later in this chapter>>.
+which we discuss <<scroll,later in this chapter>>.
****
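
For reference, a minimal scroll request of the kind this sidebar describes might look like the sketch below. The index name `old_index`, the `1m` keep-alive, and the batch size are illustrative assumptions; sorting on `_doc` is the usual way to "disable sorting" and fetch hits in cheap index order.

[source,js]
--------------------------------------------------
GET /old_index/_search?scroll=1m <1>
{
    "query": { "match_all": {} },
    "sort" : [ "_doc" ], <2>
    "size" : 1000
}
--------------------------------------------------
<1> Keep the scroll context alive for one minute between batches.
<2> Return hits in index order, skipping scoring and sorting work.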
070_Index_Mgmt/50_Reindexing.asciidoc (2 changes: 1 addition & 1 deletion)
@@ -15,7 +15,7 @@ whole document available to you in Elasticsearch itself. You don't have to
rebuild your index from the database, which is usually much slower.

To reindex all of the documents from the old index efficiently, use
-<<scan-scroll,_scroll_>> to retrieve batches((("using in reindexing documents"))) of documents from the old index,
+<<scroll,_scroll_>> to retrieve batches((("using in reindexing documents"))) of documents from the old index,
and the <<bulk,`bulk` API>> to push them into the new index.

.Reindexing in Batches
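To make the scroll-plus-bulk flow concrete, here is a hedged sketch of pushing one retrieved batch into the new index; the index name `new_index`, type `doc`, the IDs, and the document bodies are all assumptions for illustration.

[source,js]
--------------------------------------------------
POST /_bulk
{ "index": { "_index": "new_index", "_type": "doc", "_id": "1" }} <1>
{ "title": "first doc from the scroll batch" } <2>
{ "index": { "_index": "new_index", "_type": "doc", "_id": "2" }}
{ "title": "second doc from the scroll batch" }
--------------------------------------------------
<1> One action line per document, reusing the `_id` returned by the scroll.
<2> Followed by the document's `_source` on its own line.

Repeat for each scroll batch until the scroll returns no more hits.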
300_Aggregations/20_basic_example.asciidoc (8 changes: 4 additions & 4 deletions)
@@ -49,9 +49,9 @@ using a simple aggregation. We will do this using a `terms` bucket:
GET /cars/transactions/_search
{
    "size" : 0,
-   "aggs" : {
-       "popular_colors" : {
-           "terms" : {
+   "aggs" : { <1>
+       "popular_colors" : { <2>
+           "terms" : { <3>
                "field" : "color"
            }
        }
@@ -62,7 +62,7 @@ GET /cars/transactions/_search

<1> Aggregations are placed under the ((("aggregations", "aggs parameter")))top-level `aggs` parameter (the longer `aggregations`
will also work if you prefer that).
-<2> We then name the aggregation whatever we want: `colors`, in this example
+<2> We then name the aggregation whatever we want: `popular_colors`, in this example
<3> Finally, we define a single bucket of type `terms`.

Aggregations are executed in the context of search results,((("searching", "aggregations executed in context of search results"))) which means it is
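The name given to the aggregation in `<2>` is echoed back in the response, which is how you find your buckets; an illustrative response fragment (the keys and doc counts are made up):

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "popular_colors": { <1>
            "buckets": [
                { "key": "red",   "doc_count": 4 },
                { "key": "blue",  "doc_count": 2 },
                { "key": "green", "doc_count": 2 }
            ]
        }
    }
}
--------------------------------------------------
<1> The name we chose in the request reappears here.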
400_Relationships/25_Concurrency.asciidoc (4 changes: 2 additions & 2 deletions)
@@ -11,7 +11,7 @@ The more important issue is that, if the user were to change his name, all
of his blog posts would need to be updated too. Fortunately, users don't
often change names. Even if they did, it is unlikely that a user would have
written more than a few thousand blog posts, so updating blog posts with
-the <<scan-scroll,`scroll`>> and <<bulk,`bulk`>> APIs would take less than a
+the <<scroll,`scroll`>> and <<bulk,`bulk`>> APIs would take less than a
second.

However, let's consider a more complex scenario in which changes are common, far
@@ -182,7 +182,7 @@ PUT /fs/file/1?version=2 <1>
We can even rename a directory, but this means updating all of the files that
exist anywhere in the path hierarchy beneath that directory. This may be
quick or slow, depending on how many files need to be updated. All we would
-need to do is to use <<scan-scroll,`scroll`>> to retrieve all the
+need to do is to use <<scroll,`scroll`>> to retrieve all the
files, and the <<bulk,`bulk` API>> to update them. The process isn't
atomic, but all files will quickly move to their new home.
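
A hedged sketch of the retrieval half of that rename, assuming the files live in an `fs` index with a not-analyzed `path` field and the renamed directory is `/clinton/old_name/` (all illustrative):

[source,js]
--------------------------------------------------
GET /fs/file/_search?scroll=1m
{
    "query": {
        "prefix": { "path": "/clinton/old_name/" } <1>
    },
    "sort": [ "_doc" ]
}
--------------------------------------------------
<1> Matches every file whose path falls anywhere beneath the renamed directory.

Each hit is then rewritten with the new path and sent back via the `bulk` API.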

400_Relationships/26_Concurrency_solutions.asciidoc (2 changes: 1 addition & 1 deletion)
@@ -166,7 +166,7 @@ PUT /fs/lock/_bulk
--------------------------
<1> The `refresh` call ensures that all `lock` documents are visible to
the search request.
-<2> You can use a <<scan-scroll,`scroll`>> query when you need to retrieve large
+<2> You can use a <<scroll,`scroll`>> query when you need to retrieve large
numbers of results with a single search request.

Document-level locking enables fine-grained access control, but creating lock
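For completeness, subsequent batches of such a scroll are pulled with the `_scroll_id` returned by the previous response; a sketch (the ID is a truncated placeholder):

[source,js]
--------------------------------------------------
GET /_search/scroll
{
    "scroll"    : "1m", <1>
    "scroll_id" : "cXVlcnlUaGVuRmV0Y2g7NTsxMDk5NDpkUmpiR2FjOFNh..." <2>
}
--------------------------------------------------
<1> Renew the keep-alive for another minute.
<2> The `_scroll_id` from the previous response (truncated placeholder).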
410_Scaling/45_Index_per_timeframe.asciidoc (2 changes: 1 addition & 1 deletion)
@@ -29,7 +29,7 @@ data.

If we were to have one big index for documents of this type, we would soon run
out of space. Logging events just keep on coming, without pause or
-interruption. We could delete the old events with a <<scan-scroll,`scroll`>>
+interruption. We could delete the old events with a <<scroll,`scroll`>>
query and bulk delete, but this approach is _very inefficient_. When you delete a
document, it is only _marked_ as deleted (see <<deletes-and-updates>>). It won't
be physically deleted until the segment containing it is merged away.
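That inefficiency is what motivates the index-per-timeframe pattern this section describes: dropping a whole index removes its segments outright, with no mark-and-merge cycle. A sketch with an illustrative index name:

[source,js]
--------------------------------------------------
DELETE /logs_2014-09 <1>
--------------------------------------------------
<1> Deletes the entire month's index immediately; no per-document tombstones.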
410_Scaling/75_One_big_user.asciidoc (2 changes: 1 addition & 1 deletion)
@@ -23,7 +23,7 @@ PUT /baking_v1
------------------------------

The next step is to migrate the data from the shared index into the dedicated
-index, which can be done using a <<scan-scroll, `scroll`>> query and the
+index, which can be done using a <<scroll, `scroll`>> query and the
<<bulk,`bulk` API>>. As soon as the migration is finished, the index alias
can be updated to point to the new index:

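A hedged sketch of that alias update, assuming the shared index is named `shared_index` and the alias users query is `baking` (both assumptions; only `baking_v1` comes from the diff):

[source,js]
--------------------------------------------------
POST /_aliases
{
    "actions": [
        { "remove": { "index": "shared_index", "alias": "baking" }}, <1>
        { "add":    { "index": "baking_v1",    "alias": "baking" }}  <2>
    ]
}
--------------------------------------------------
<1> Detach the alias from the shared index.
<2> Point it at the dedicated index in the same atomic operation.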
510_Deployment/40_config.asciidoc (1 change: 1 addition & 0 deletions)
@@ -1,3 +1,4 @@
+[[important-configuration-changes]]
=== Important Configuration Changes
Elasticsearch ships with _very good_ defaults,((("deployment", "configuration changes, important")))((("configuration changes, important"))) especially when it comes to performance-
related settings and options. When in doubt, just leave
