Fix defaults (#127)
* update docstrings to correct defaults, make standard scaling False

* update notebook defaults

* Clarify standard_scaling parameter usage.
adamgayoso authored and JonathanShor committed Jul 17, 2019
1 parent d6e38c3 commit 267f510
Showing 2 changed files with 8 additions and 7 deletions.
11 changes: 6 additions & 5 deletions doubletdetection/doubletdetection.py
@@ -77,8 +77,8 @@ class BoostClassifier:
use; other genes discarded. Will use all genes when zero.
replace (bool, optional): If False, a cell will be selected as a
synthetic doublet's parent no more than once.
- use_phenograph (bool, optional): Set to True to use PhenoGraph clustering.
- Defaults to False, which uses louvain clustering implemented in scanpy.
+ use_phenograph (bool, optional): Set to False to disable PhenoGraph clustering
+ in exchange for louvain clustering implemented in scanpy. Defaults to True.
phenograph_parameters (dict, optional): Parameter dict to pass directly
to PhenoGraph. Note that we change the PhenoGraph 'prune' default to
True; you must specifically include 'prune': False here to change
@@ -97,8 +97,9 @@ class BoostClassifier:
results across runs.
verbose (bool, optional): Set to False to silence all normal operation
informational messages. Defaults to True.
- standard_scaling (bool, optional): Set to False to disable standard scaling
- of normalized count matrix prior to clustering. Defaults to True.
+ standard_scaling (bool, optional): Set to True to enable standard scaling
+ of normalized count matrix prior to PCA. Recommended when not using
+ Phenograph. Defaults to False.
Attributes:
all_log_p_values_ (ndarray): Hypergeometric test natural log p-value per
@@ -136,7 +137,7 @@ def __init__(
normalizer=None,
random_state=0,
verbose=False,
- standard_scaling=True,
+ standard_scaling=False,
):
self.boost_rate = boost_rate
self.replace = replace
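
To illustrate the effect of this change, here is a minimal, hedged usage sketch (not code from the repository): after this commit, constructing the classifier with no arguments is equivalent to passing use_phenograph=True and standard_scaling=False. The random counts matrix below is a placeholder for a real cells-by-genes matrix.

import numpy as np
import doubletdetection

# Placeholder cells-by-genes count matrix; substitute real raw counts.
raw_counts = np.random.poisson(1.0, size=(500, 200))

# Equivalent to the post-commit defaults: PhenoGraph clustering,
# no standard scaling of the normalized counts before PCA.
clf = doubletdetection.BoostClassifier(
    use_phenograph=True,
    standard_scaling=False,
)
doublets = clf.fit(raw_counts).predict(p_thresh=1e-7, voter_thresh=0.8)
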
4 changes: 2 additions & 2 deletions tests/notebooks/PBMC_8k_vignette.ipynb
@@ -74,7 +74,7 @@
"source": [
"## Run Doublet Detection\n",
"\n",
"Here we show-off the new backend implementation that uses `scanpy`. This new implementation is over 2x faster than version 2.4.0. To use the previous version of DoubletDetection please add the parameters `use_phenograph=True`, `verbose=True` to the classifier and use the thresholds `p_thresh=1e-7`, `voter_thresh=0.8`. We recommend first using these parameters until we further validate the new implementation."
"Here we show-off the new backend implementation that uses `scanpy`. This new implementation is over 2x faster than version 2.4.0. To use the previous version of DoubletDetection please add the parameters (`use_phenograph=True`, `verbose=True`, `standard_scaling=False`) to the classifier and use the thresholds `p_thresh=1e-7`, `voter_thresh=0.8`. We recommend first using these parameters until we further validate the new implementation."
]
},
{
@@ -107,7 +107,7 @@
}
],
"source": [
"clf = doubletdetection.BoostClassifier(n_iters=50, use_phenograph=False)\n",
"clf = doubletdetection.BoostClassifier(n_iters=50, use_phenograph=False, standard_scaling=True)\n",
"doublets = clf.fit(raw_counts).predict(p_thresh=1e-16, voter_thresh=0.5)"
]
},
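
For completeness, the updated vignette text above names the parameters for reproducing the previous (PhenoGraph-based, version 2.4.0-style) behavior. A hedged sketch assembled from those parameters; the placeholder counts matrix stands in for the PBMC 8k data the vignette loads in earlier cells.

import numpy as np
import doubletdetection

# Placeholder for the raw counts matrix loaded earlier in the vignette.
raw_counts = np.random.poisson(1.0, size=(500, 200))

# Parameters the vignette suggests for matching the previous implementation.
clf = doubletdetection.BoostClassifier(
    n_iters=50,
    use_phenograph=True,
    verbose=True,
    standard_scaling=False,
)
doublets = clf.fit(raw_counts).predict(p_thresh=1e-7, voter_thresh=0.8)
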

