Hi, I noticed that CellOracle uses regression models such as BaggingRegressor to fit gene expression from transcription factor expression without splitting the data into training and test sets. Can I infer that, in this scenario, the model is prone to overfitting? Does this strategy offer any benefit for inferring gene regulation?
```python
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import Ridge

if scaling:
    data = gem_scaled[reg_all]
else:
    data = gem[reg_all]
label = gem[target_gene]

try:  # For sklearn version 1.2 or later
    model = BaggingRegressor(estimator=Ridge(alpha=alpha,
                                             solver=solver,
                                             random_state=123),
                             n_estimators=bagging_number,
                             bootstrap=True,
                             max_features=0.8,
                             n_jobs=n_jobs,
                             verbose=False,
                             random_state=123)
except:  # For older versions of sklearn, which use `base_estimator` instead of `estimator`
    model = BaggingRegressor(base_estimator=Ridge(alpha=alpha,
                                                  solver=solver,
                                                  random_state=123),
                             n_estimators=bagging_number,
                             bootstrap=True,
                             max_features=0.8,
                             n_jobs=n_jobs,
                             verbose=False,
                             random_state=123)

# Fit on the full expression matrix; no train/test split is performed.
model.fit(data, label)
```
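One way to check whether this kind of single-gene regression overfits is to compare the in-sample fit against a held-out estimate, for example with cross-validation. The sketch below is not CellOracle code; the data, `n_cells`, `n_tfs`, and hyperparameter values are made-up placeholders, and it simply applies that check to the same Ridge-plus-bagging setup:

```python
# Minimal sketch (assumed setup, not part of CellOracle): compare in-sample R^2
# with cross-validated R^2 for a Ridge regressor wrapped in BaggingRegressor.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cells, n_tfs = 500, 30                                 # illustrative sizes
data = rng.normal(size=(n_cells, n_tfs))                 # stand-in for TF expression (cells x TFs)
label = data @ rng.normal(size=n_tfs) + rng.normal(scale=0.5, size=n_cells)  # stand-in target gene

model = BaggingRegressor(
    estimator=Ridge(alpha=1.0, random_state=123),        # `estimator=` requires sklearn >= 1.2
    n_estimators=20,
    bootstrap=True,
    max_features=0.8,
    random_state=123,
)

# A large gap between the two scores would indicate overfitting.
model.fit(data, label)
print("in-sample R^2:       ", model.score(data, label))
print("5-fold CV R^2 (mean):", cross_val_score(model, data, label, cv=5, scoring="r2").mean())
```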