As discussed in the paper, there are many options for dealing with covariates:
A possible solution may involve training task-specific adaptors that inject the covariates into the pretrained forecasting model (Rahman et al., 2020). As another option, we can build stacking ensembles (Ting & Witten, 1997) of Chronos and other lightweight models that excel at handling covariates, such as LightGBM (Ke et al., 2017).
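To make the stacking option concrete, here is a rough sketch of how zero-shot Chronos forecasts could be combined with a covariate-aware LightGBM model. It uses the `ChronosPipeline` API from the chronos-forecasting package; the synthetic data, the horizon `H`, and the one-step-lagged stand-in for backtested base forecasts are all placeholder assumptions, not a recommended setup:

```python
import numpy as np
import torch
from chronos import ChronosPipeline
from lightgbm import LGBMRegressor

H = 24  # forecast horizon (arbitrary for this example)
rng = np.random.default_rng(0)

# Synthetic stand-ins for a real dataset -- replace with your own series.
history = rng.normal(size=200).cumsum()      # past target values, shape [T]
past_covariates = rng.normal(size=(200, 3))  # covariates aligned with history
future_covariates = rng.normal(size=(H, 3))  # covariates known over the horizon

# 1) Zero-shot base forecast from Chronos (covariates are ignored here).
pipeline = ChronosPipeline.from_pretrained("amazon/chronos-t5-small")
context = torch.tensor(history, dtype=torch.float32)
samples = pipeline.predict(context, prediction_length=H)  # [1, num_samples, H]
chronos_point = samples.median(dim=1).values.squeeze(0).numpy()  # point forecast, [H]

# 2) Train the stacker. In a real pipeline the base-forecast feature column
#    would come from backtested Chronos predictions over the training range;
#    a one-step-lagged target is used here as a crude proxy so the example
#    runs end to end.
X_train = np.column_stack([history[:-1], past_covariates[:-1]])
y_train = history[1:]
stacker = LGBMRegressor(n_estimators=200)
stacker.fit(X_train, y_train)

# 3) Final forecast: LightGBM combines the Chronos point forecast with
#    the known future covariates.
X_future = np.column_stack([chronos_point, future_covariates])
final_forecast = stacker.predict(X_future)
print(final_forecast.shape)  # (24,)
```

A common variant of this scheme has the stacker predict the residual between the base forecast and the realized target rather than the target itself, which keeps LightGBM focused on the covariate-driven correction.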
What has your experience been so far with incorporating static and dynamic covariates into the prediction process? Our approach so far has been based on ensembles, where we predict each dimension individually and feed these predictions into LightGBM. However, this implies that we need a LightGBM model trained for the dataset. Ideally, we'd like to handle covariates with an existing pretrained head model, to avoid any dataset-specific fine-tuning.
What has your experience been with handling these?