Most of the feature preprocessors that we use are based on linear methods. We should look into adding non-linear dimensionality reduction preprocessors, such as:
This paper http://bit.ly/2gbuKey suggests that non-linear dimensionality reduction techniques fail to improve upon PCA on natural datasets; it even includes KernelPCA in the comparison. Since PCA is much faster than KernelPCA and other non-linear techniques, I would vote against including the non-linear methods. A rough speed/accuracy comparison along these lines is sketched below.
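A minimal sketch of that kind of comparison, outside of TPOT itself: the dataset, component count, and downstream classifier are illustrative assumptions, not anything from the paper or this thread.

```python
# Compare PCA vs. KernelPCA as a preprocessing step on a standard dataset.
# Dataset (digits), n_components, and the LogisticRegression classifier are
# illustrative choices only.
from time import time

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)

for reducer in (PCA(n_components=30), KernelPCA(n_components=30, kernel="rbf")):
    pipe = make_pipeline(reducer, LogisticRegression(max_iter=1000))
    start = time()
    scores = cross_val_score(pipe, X, y, cv=3)
    print(f"{reducer.__class__.__name__}: "
          f"accuracy={scores.mean():.3f}, time={time() - start:.1f}s")
```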
As a first experiment, one could add the non-linear preprocessors to the configuration, run TPOT on 2-3 standard datasets, and check whether any of them show up in the best pipelines; a sketch of that experiment follows.
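A hedged sketch of that experiment: pass TPOT a custom `config_dict` that includes a non-linear preprocessor (here KernelPCA) and inspect the fitted pipeline afterwards. The operators and hyperparameter ranges below are a small illustrative subset, not TPOT's actual default configuration.

```python
# Run TPOT with a config that includes KernelPCA and check whether it is
# selected in the winning pipeline. Dataset and search budget are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

tpot_config = {
    "sklearn.decomposition.PCA": {
        "n_components": range(5, 40, 5),
    },
    "sklearn.decomposition.KernelPCA": {
        "n_components": range(5, 40, 5),
        "kernel": ["rbf", "poly", "cosine"],
    },
    "sklearn.linear_model.LogisticRegression": {
        "C": [0.01, 0.1, 1.0, 10.0],
    },
}

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20,
                      config_dict=tpot_config, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
print(tpot.fitted_pipeline_)  # inspect whether KernelPCA was selected
```

Repeating this over a few datasets would give a quick signal on whether the non-linear preprocessors ever beat plain PCA in practice.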