I ran the unsupervised analysis leaving many of the default parameters unchanged. One of them, heatmap: n_observations, causes an error for my pseudobulks, which have fewer cells than that default of 1000:
heatmap:
  metrics: ['correlation','cosine']
  hclust_methods: ['complete']
- n_observations: 1000 # random sampled proportion float (0-1] or absolute number as integer
+ n_observations: 1 # random sampled proportion float (0-1] or absolute number as integer
  n_features: 0.5 # highly variable features proportion float (0-1] or absolute number as integer
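For context, a minimal sketch of why the default of 1000 fails on a small pseudobulk matrix (assuming, as in the script, that the data are held in a pandas DataFrame and sampled without replacement, the pandas default):

import numpy as np
import pandas as pd

# hypothetical pseudobulk matrix with only 200 observations
data = pd.DataFrame(np.random.rand(200, 50))

# pandas refuses to draw more rows than exist when sampling without replacement,
# so this raises: ValueError: Cannot take a larger sample than population when 'replace=False'
data.sample(n=1000, random_state=42)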
An alternative could also be to make the respective line defensive:
# downsample observations
if data_or_feature == "observations":
    if isinstance(n_observations, float) or n_observations==1:
        n_observations = int(math.floor(n_observations * data.shape[0]))
+   if n_observations < data.shape[0]:
        data = data.sample(n=n_observations, random_state=42)
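For completeness, a self-contained sketch of that defensive behaviour (the function name downsample_observations and the explicit clamping are illustrative assumptions, not the script's actual code):

import math
import pandas as pd

def downsample_observations(data: pd.DataFrame, n_observations, random_state: int = 42) -> pd.DataFrame:
    # Interpret n_observations as a proportion (float in (0, 1]) or an absolute count (int).
    if isinstance(n_observations, float) or n_observations == 1:
        n_observations = int(math.floor(n_observations * data.shape[0]))
    # Clamp to the number of available rows so .sample() can never be asked for more than exist.
    n_observations = min(n_observations, data.shape[0])
    if n_observations < data.shape[0]:
        data = data.sample(n=n_observations, random_state=random_state)
    return data

# e.g. a pseudobulk matrix with 200 observations is returned unchanged,
# even if the configured default of 1000 is kept
small = pd.DataFrame([[0.0] * 5] * 200)
assert downsample_observations(small, 1000).shape[0] == 200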
The relevant code: unsupervised_analysis/workflow/scripts/distance_matrix.py, lines 40 to 44 at commit f902eff.