A recent refactoring made tasks' preprocessing lazy and streaming. The preprocessing first defines stateless functions (e.g. resize and normalize for images) and then applies them successively to the dataset, one row at a time:
```typescript
// The preprocessing functions are applied as a map on the dataset:
this.dataset.map(this.preprocessing)
```
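A minimal sketch of this stateless, streaming style (the function names and `mapDataset` helper are illustrative, not part of the codebase): each preprocessing function takes a single row and returns the transformed row, and the composed pipeline is applied lazily.

```typescript
type Row = number[];
type Preprocessing = (row: Row) => Row;

// Stateless example functions: each one only sees a single row.
const scaleToUnit: Preprocessing = (row) => row.map((x) => x / 255);
const clip: Preprocessing = (row) => row.map((x) => Math.min(Math.max(x, 0), 1));

// Apply the functions successively, one row at a time, via a generator
// so rows are only processed when consumed (lazy and streaming).
function* mapDataset(rows: Iterable<Row>, fns: Preprocessing[]): Generator<Row> {
  for (const row of rows) {
    yield fns.reduce((r, fn) => fn(r), row);
  }
}

const out = [...mapDataset([[255, 128]], [scaleToUnit, clip])];
// out[0] is [1, 128/255]
```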
Limitations to address:
This purely functional preprocessing doesn't allow for "stateful" preprocessing. Because of its streaming nature, it is currently impossible to normalize a tabular column, since we can't compute aggregations over the whole dataset (e.g. the mean and standard deviation of a feature). In other words, any new preprocessing function can only take a single dataset row as its sole argument, which is very constraining.
The preprocessing state learned during training should be saved so it can be re-used at test and inference time. For example, standardizing the test set should use the training set's mean and standard deviation, not the test set's own statistics. Saving this preprocessing state is currently not supported.
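One possible shape for addressing both limitations is a stateful preprocessor with an explicit fit step and serializable state. The sketch below is hypothetical (the `Standardizer` class and its `fit`/`apply`/`save`/`load` names are not part of the current codebase): `fit` makes one full pass over the training stream to compute per-column statistics, `apply` stays a per-row transform that can still be used inside a streaming map, and the learned state can be saved for test and inference.

```typescript
type Row = number[];

interface StandardizerState {
  mean: number[];
  std: number[];
}

class Standardizer {
  private state?: StandardizerState;

  // One full pass over the training stream to compute per-column mean/std.
  fit(rows: Iterable<Row>): void {
    let n = 0;
    const sum: number[] = [];
    const sumSq: number[] = [];
    for (const row of rows) {
      n += 1;
      row.forEach((x, i) => {
        sum[i] = (sum[i] ?? 0) + x;
        sumSq[i] = (sumSq[i] ?? 0) + x * x;
      });
    }
    const mean = sum.map((s) => s / n);
    const std = sumSq.map((s, i) => Math.sqrt(s / n - mean[i] ** 2));
    this.state = { mean, std };
  }

  // Per-row transform: reuses the state learned on the training set,
  // so it still fits a streaming `dataset.map(...)` call.
  apply(row: Row): Row {
    if (!this.state) throw new Error("call fit() before apply()");
    const { mean, std } = this.state;
    return row.map((x, i) => (x - mean[i]) / (std[i] || 1));
  }

  // The learned state can be persisted and restored for test/inference.
  save(): string {
    return JSON.stringify(this.state);
  }

  load(json: string): void {
    this.state = JSON.parse(json);
  }
}
```

With this split, standardizing the test set with the training set's statistics is just a matter of calling `load` with the state saved after training, then mapping `apply` over the test rows.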