DOC Sphinx Unexpected unindent warnings (scikit-learn#15130)
cmarmo authored and thomasjpfan committed Oct 4, 2019
1 parent a47e914 commit 871b251
Showing 12 changed files with 63 additions and 47 deletions.
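Most of the changes below add the blank line that reST requires between a paragraph and a following bullet list or literal block in docstrings and doc pages; without that separator the list is parsed as part of the paragraph and Sphinx ends up emitting warnings such as "Unexpected unindent" when the indentation drops back. A minimal sketch of the pattern and of the fix, using a hypothetical parameter purely for illustration:

    # A numpydoc-style docstring of the shape fixed throughout this commit.
    # The blank line before the bullet list is what keeps Sphinx/docutils from
    # gluing the list onto the preceding paragraph and then warning about the
    # unexpected unindent at the end of it.
    def draw_samples(max_samples=None):
        """Draw a subset of samples (hypothetical helper, illustration only).

        Parameters
        ----------
        max_samples : int or float, default=None
            The number of samples to draw.

            - If None (default), then draw all samples.
            - If int, then draw `max_samples` samples.
            - If float, then draw `max_samples * n_samples` samples.
        """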
1 change: 1 addition & 0 deletions doc/contents.rst
@@ -2,6 +2,7 @@
.. include:: tune_toc.rst

.. Places global toc into the sidebar
:globalsidebartoc: True

=================
1 change: 1 addition & 0 deletions doc/developers/index.rst
@@ -1,4 +1,5 @@
.. Places global toc into the sidebar
:globalsidebartoc: True

.. _developers_guide:
1 change: 1 addition & 0 deletions doc/preface.rst
@@ -2,6 +2,7 @@
useful for PDF output as this section is not linked from elsewhere.
.. Places global toc into the sidebar
:globalsidebartoc: True

.. _preface_menu:
1 change: 1 addition & 0 deletions doc/tutorial/index.rst
@@ -1,4 +1,5 @@
.. Places global toc into the sidebar
:globalsidebartoc: True

.. _tutorial_menu:
1 change: 1 addition & 0 deletions doc/user_guide.rst
@@ -1,4 +1,5 @@
.. Places global toc into the sidebar
:globalsidebartoc: True

.. title:: User guide: contents
45 changes: 22 additions & 23 deletions examples/neighbors/approximate_nearest_neighbors.py
@@ -18,29 +18,28 @@
compatibility reasons, one extra neighbor is computed when
`mode == 'distance'`. Please note that we do the same in the proposed wrappers.
Sample output:
```
Benchmarking on MNIST_2000:
---------------------------
AnnoyTransformer: 0.583 sec
NMSlibTransformer: 0.321 sec
KNeighborsTransformer: 1.225 sec
TSNE with AnnoyTransformer: 4.903 sec
TSNE with NMSlibTransformer: 5.009 sec
TSNE with KNeighborsTransformer: 6.210 sec
TSNE with internal NearestNeighbors: 6.365 sec
Benchmarking on MNIST_10000:
----------------------------
AnnoyTransformer: 4.457 sec
NMSlibTransformer: 2.080 sec
KNeighborsTransformer: 30.680 sec
TSNE with AnnoyTransformer: 30.225 sec
TSNE with NMSlibTransformer: 43.295 sec
TSNE with KNeighborsTransformer: 64.845 sec
TSNE with internal NearestNeighbors: 64.984 sec
```
Sample output::

    Benchmarking on MNIST_2000:
    ---------------------------
    AnnoyTransformer: 0.583 sec
    NMSlibTransformer: 0.321 sec
    KNeighborsTransformer: 1.225 sec
    TSNE with AnnoyTransformer: 4.903 sec
    TSNE with NMSlibTransformer: 5.009 sec
    TSNE with KNeighborsTransformer: 6.210 sec
    TSNE with internal NearestNeighbors: 6.365 sec

    Benchmarking on MNIST_10000:
    ----------------------------
    AnnoyTransformer: 4.457 sec
    NMSlibTransformer: 2.080 sec
    KNeighborsTransformer: 30.680 sec
    TSNE with AnnoyTransformer: 30.225 sec
    TSNE with NMSlibTransformer: 43.295 sec
    TSNE with KNeighborsTransformer: 64.845 sec
    TSNE with internal NearestNeighbors: 64.984 sec
"""
# Author: Tom Dupre la Tour
#
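In the example above, the Markdown-style ``` fence is replaced by a reST literal block introduced by `::`; the body of such a block must be indented and surrounded by blank lines, otherwise Sphinx emits unindent warnings when rendering the docstring. A minimal, purely illustrative sketch of the convention:

    """Illustrative module docstring for a sphinx-gallery style example.

    Sample output::

        SomeTransformer: 0.1 sec
        OtherTransformer: 0.2 sec

    The indented lines after ``::`` are rendered verbatim as a literal block.
    """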
4 changes: 2 additions & 2 deletions sklearn/ensemble/_stacking.py
@@ -342,8 +342,8 @@ class StackingClassifier(ClassifierMixin, _BaseStacking):
`'predict_proba'`, `'decision_function'` or `'predict'` in that
order.
* otherwise, one of `'predict_proba'`, `'decision_function'` or
`'predict'`. If the method is not implemented by the estimator, it
will raise an error.
`'predict'`. If the method is not implemented by the estimator, it
will raise an error.
n_jobs : int, default=None
The number of jobs to run in parallel for the `fit` of all `estimators`.
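For context, `stack_method` controls which prediction method of each base estimator feeds the final estimator. A minimal usage sketch on toy data with the default `stack_method='auto'`, not part of this commit:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import LinearSVC

    X, y = load_iris(return_X_y=True)
    clf = StackingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("svc", LinearSVC())],
        final_estimator=LogisticRegression(),
        # 'auto' picks predict_proba, decision_function or predict per estimator.
        stack_method="auto",
    )
    clf.fit(X, y)
    print(clf.predict(X[:3]))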
45 changes: 25 additions & 20 deletions sklearn/ensemble/forest.py
@@ -989,10 +989,11 @@ class RandomForestClassifier(ForestClassifier):
max_samples : int or float, default=None
If bootstrap is True, the number of samples to draw from X
to train each base estimator.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
.. versionadded:: 0.22
@@ -1277,10 +1278,11 @@ class RandomForestRegressor(ForestRegressor):
max_samples : int or float, default=None
If bootstrap is True, the number of samples to draw from X
to train each base estimator.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
.. versionadded:: 0.22
@@ -1576,10 +1578,11 @@ class ExtraTreesClassifier(ForestClassifier):
max_samples : int or float, default=None
If bootstrap is True, the number of samples to draw from X
to train each base estimator.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
.. versionadded:: 0.22
@@ -1841,10 +1844,11 @@ class ExtraTreesRegressor(ForestRegressor):
max_samples : int or float, default=None
If bootstrap is True, the number of samples to draw from X
to train each base estimator.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
.. versionadded:: 0.22
@@ -2069,10 +2073,11 @@ class RandomTreesEmbedding(BaseForest):
max_samples : int or float, default=None
If bootstrap is True, the number of samples to draw from X
to train each base estimator.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0, 1)`.
.. versionadded:: 0.22
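The `max_samples` parameter documented in these hunks (new in 0.22) caps the bootstrap sample size drawn for each tree. A minimal usage sketch on toy data, not part of this commit:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    # Draw only half of the rows (with replacement) to train each tree.
    forest = RandomForestClassifier(n_estimators=50, bootstrap=True,
                                    max_samples=0.5, random_state=0)
    forest.fit(X, y)
    print(forest.score(X, y))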
1 change: 1 addition & 0 deletions sklearn/ensemble/partial_dependence.py
@@ -211,6 +211,7 @@ def plot_partial_dependence(gbrt, X, features, feature_names=None,
----------
gbrt : BaseGradientBoosting
A fitted gradient boosting model.
X : array-like of shape (n_samples, n_features)
The data on which ``gbrt`` was trained.
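The hunk above documents the `gbrt` argument of the legacy `sklearn.ensemble.partial_dependence.plot_partial_dependence` helper, which takes an already fitted gradient boosting model. A minimal sketch against the signature shown in the hunk header (this module was already on a deprecation path in favour of `sklearn.inspection`):

    from sklearn.datasets import make_friedman1
    from sklearn.ensemble import GradientBoostingRegressor
    # Importing from this legacy module emits a deprecation warning.
    from sklearn.ensemble.partial_dependence import plot_partial_dependence

    X, y = make_friedman1(n_samples=200, random_state=0)
    gbrt = GradientBoostingRegressor(random_state=0).fit(X, y)
    # One-way partial dependence plots for the first two features.
    fig, axs = plot_partial_dependence(gbrt, X, features=[0, 1])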
2 changes: 2 additions & 0 deletions sklearn/inspection/partial_dependence.py
@@ -683,11 +683,13 @@ class PartialDependenceDisplay:
Feature names corresponding to the indices in ``features``.
target_idx : int
- In a multiclass setting, specifies the class for which the PDPs
should be computed. Note that for binary classification, the
positive class (index 1) is always used.
- In a multioutput setting, specifies the task for which the PDPs
should be computed.
Ignored in binary classification or classical regression settings.
pdp_lim : dict
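`PartialDependenceDisplay` objects are normally produced by `sklearn.inspection.plot_partial_dependence`; in a multiclass setting the class whose partial dependence is plotted is selected with the `target` argument, which ends up stored as `target_idx`. A minimal sketch, not part of this commit:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import plot_partial_dependence

    X, y = load_iris(return_X_y=True)
    clf = GradientBoostingClassifier(random_state=0).fit(X, y)
    # Multiclass problem: pick the class of interest explicitly via `target`.
    display = plot_partial_dependence(clf, X, features=[0, 1], target=2)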
4 changes: 2 additions & 2 deletions sklearn/manifold/isomap.py
@@ -168,8 +168,8 @@ def _fit_transform(self, X):
self.embedding_ = self.kernel_pca_.fit_transform(G)

@property
@deprecated("Attribute training_data_ was deprecated in version 0.22 and "
"will be removed in 0.24.")
@deprecated("Attribute `training_data_` was deprecated in version 0.22 and"
" will be removed in 0.24.")
def training_data_(self):
check_is_fitted(self)
return self.nbrs_._fit_X
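The `@deprecated` helper used above is `sklearn.utils.deprecated`, which wraps a function or property and emits a deprecation warning with the given message when it is accessed. A minimal sketch mirroring the pattern in the diff, with a stand-in class purely for illustration:

    from sklearn.utils import deprecated

    class SomeEstimator:
        def __init__(self):
            self._fit_X = "training data"

        @property
        @deprecated("Attribute `training_data_` was deprecated in version 0.22 and"
                    " will be removed in 0.24.")
        def training_data_(self):
            # Accessing the property triggers the deprecation warning.
            return self._fit_X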
4 changes: 4 additions & 0 deletions sklearn/utils/__init__.py
@@ -285,10 +285,13 @@ def safe_indexing(X, indices, axis=0):
X : array-like, sparse-matrix, list, pandas.DataFrame, pandas.Series
Data from which to sample rows, items or columns. `list` are only
supported when `axis=0`.
indices : bool, int, str, slice, array-like
- If `axis=0`, boolean and integer array-like, integer slice,
and scalar integer are supported.
- If `axis=1`:
- to select a single column, `indices` can be of `int` type for
all `X` types and `str` only for dataframe. The selected subset
will be 1D, unless `X` is a sparse matrix in which case it will
@@ -298,6 +301,7 @@ def safe_indexing(X, indices, axis=0):
these containers can be one of the following: `int`, `bool` and
`str`. However, `str` is only supported when `X` is a dataframe.
The selected subset will be 2D.
axis : int, default=0
The axis along which `X` will be subsampled. `axis=0` will select
rows while `axis=1` will select columns.
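A minimal usage sketch of `safe_indexing` with the `axis` argument documented above, using illustrative values only:

    import numpy as np
    from sklearn.utils import safe_indexing

    X = np.arange(12).reshape(4, 3)
    rows = safe_indexing(X, [0, 2], axis=0)   # select rows 0 and 2 -> shape (2, 3)
    col = safe_indexing(X, 1, axis=1)         # select the second column -> 1D array
    mask = safe_indexing(X, [True, False, True, False], axis=0)  # boolean row mask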
