Releases: sktime/pytorch-forecasting
Adding N-HiTS network (N-BEATS successor)
Added
- Added new `N-HiTS` network that has consistently beaten `N-BEATS` (#890)
- Allow using torchmetrics as loss metrics (#776)
- Enable fitting `EncoderNormalizer()` with limited data history using the `max_length` argument (#782)
- More flexible `MultiEmbedding()` with convenience `output_size` and `input_size` properties (#829)
- Fix concatenation of attention (#902)
Fixed
- Fix pip install via github (#798)
Contributors
- jdb78
- christy
- lukemerrick
- Seon82
Maintenance Release (26/09/2021)
Added
- Use target name instead of target number for logging metrics (#588)
- Optimizer can be initialized by passing string, class or function (#602)
- Add support for multiple outputs in Baseline model (#603)
- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (#619)
- Dropping support for Python 3.6 and starting support for Python 3.9 (#639)
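The flexible optimizer argument (#602) accepts a string, a class, or a function. A generic sketch of how such a spec can be resolved — the `resolve_optimizer` helper, its registry, and the toy `SGD` class below are hypothetical illustrations, not the library's actual code:

```python
# Hypothetical sketch: resolving an optimizer spec given as a string,
# a class, or a factory function. Not pytorch-forecasting's implementation.
class SGD:
    def __init__(self, params, lr):
        self.params, self.lr = params, lr

REGISTRY = {"sgd": SGD}  # illustrative name-to-class lookup

def resolve_optimizer(spec, params, lr=0.01):
    if isinstance(spec, str):           # e.g. "sgd"
        return REGISTRY[spec.lower()](params, lr)
    if callable(spec):                  # a class like SGD, or any factory function
        return spec(params, lr)
    raise TypeError(f"unsupported optimizer spec: {spec!r}")

opt = resolve_optimizer("sgd", [1.0], lr=0.1)
print(type(opt).__name__, opt.lr)       # SGD 0.1
```

Accepting all three forms lets quick experiments use a string while advanced users pass a fully configured class or closure.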
Fixed
- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (#550)
- Added missing transformation of prediction for MLP (#602)
- Fixed logging hyperparameters (#688)
- Ensure MultiNormalizer fit state is detected (#681)
- Fix infinite loop in TimeDistributedEmbeddingBag (#672)
Contributors
- jdb78
- TKlerx
- chefPony
- eavae
- L0Z1K
Simplified API
Breaking changes
- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`. Use `categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True))` instead (#518)
- Rename parameter `allow_missings` for `TimeSeriesDataSet` to `allow_missing_timesteps` (#518)
- Transparent handling of transformations. Forward methods should now call two new methods (#518):
  - `transform_output` to explicitly rescale the network outputs into the de-normalized space
  - `to_network_output` to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only `prediction` is still required, which is the main network output.

  Example:

  ```python
  def forward(self, x):
      normalized_prediction = self.module(x)
      prediction = self.transform_output(
          prediction=normalized_prediction, target_scale=x["target_scale"]
      )
      return self.to_network_output(prediction=prediction)
  ```
Added
- Improved validation of input parameters of TimeSeriesDataSet (#518)
Generic distribution loss(es)
Added
- Allow lists for multiple losses and normalizers (#405)
- Warn if normalization is with scale `< 1e-7` (#429)
- Allow usage of distribution losses in all settings (#434)
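A plain-Python illustration of why a near-zero normalization scale warrants a warning (the numbers below are chosen for illustration, not taken from the library):

```python
# Illustration: normalizing with a near-zero scale amplifies tiny numeric
# noise into large differences, which destabilizes training.
center, scale = 1.0, 1e-9           # scale far below the 1e-7 warning threshold
a, b = 1.0, 1.0 + 1e-8              # two nearly identical raw values
norm_a = (a - center) / scale
norm_b = (b - center) / scale
print(norm_b - norm_a)              # roughly 10: a 1e-8 gap amplified ~1e9x
```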
Fixed
- Fix issue when predicting and data is on different devices (#402)
- Fix non-iterable output (#404)
- Fix problem with moving data to CPU for multiple targets (#434)
Contributors
- jdb78
- domplexity
Simple models
Added
- Adding a filter functionality to the timeseries dataset (#329)
- Add simple models such as LSTM, GRU and an MLP on the decoder (#380)
- Allow usage of any torch optimizer such as SGD (#380)
Fixed
- Moving predictions to CPU to avoid running out of memory (#329)
- Correct determination of `output_size` for multi-target forecasting with the TemporalFusionTransformer (#328)
- Tqdm autonotebook fix to work outside of Jupyter (#338)
- Fix issue with yaml serialization for TensorboardLogger (#379)
Contributors
- jdb78
- JakeForsey
- vakker
Bugfix release
Fixed
- Underlying data is copied if modified. Original data is not modified inplace (#263)
- Allow plotting of interpretation on passed figure for NBEATS (#280)
- Fix memory leak for plotting and logging interpretation (#311)
- Correct shape of `predict()` method output for multi-targets (#268)
- Remove cloudpickle to allow GPU trained models to be loaded on CPU devices from checkpoints (#314)
Contributors
- jdb78
- kigawas
- snumumrik
Fix for output transformer
- Added missing output transformation which was switched off by default (#260)
Adding support for lag variables
Adding multi-target support
Added
- Adding support for multiple targets in the TimeSeriesDataSet (#199) and amended tutorials.
- Temporal fusion transformer and DeepAR with support for multiple targets (#199)
- Check for non-finite values in TimeSeriesDataSet and better validate scaler argument (#220)
- LSTM and GRU implementations that can handle zero-length sequences (#235)
- Helpers for implementing auto-regressive models (#236)
Changed
- TimeSeriesDataSet's `y` of the dataloader is a tuple of (target(s), weight) - potentially breaking for model or metrics implementations.
  Most implementations will not be affected, as hooks in BaseModel and MultiHorizonMetric were modified.
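For custom training loops, the change means unpacking `y` as a tuple rather than treating it as a bare target tensor. A minimal illustrative sketch (the literal values are made up):

```python
# Illustrative sketch: the dataloader's y is now a (target(s), weight)
# tuple rather than a bare target tensor; the weight may be None.
batch_y = ([0.3, 0.4, 0.5], None)   # stand-in for a real batch's y

target, weight = batch_y            # unpack the new tuple form
loss_weight = 1.0 if weight is None else weight
print(target, loss_weight)
```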
Fixed
- Fixed autocorrelation for pytorch 1.7 (#220)
- Ensure reproducibility by replacing python `set()` with `dict.fromkeys()` (mostly TimeSeriesDataSet) (#221)
- Ensures BetaDistributionLoss does not lead to infinite loss if actuals are 0 or 1 (#233)
- Fix for GroupNormalizer if scaling by group (#223)
- Fix for TimeSeriesDataSet when using `min_prediction_idx` (#226)
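The reproducibility fix above works because `dict.fromkeys()` de-duplicates while preserving first-seen insertion order (guaranteed since Python 3.7), whereas `set()` iteration order is not stable across runs. A quick standalone illustration (the column names are made up):

```python
# Order-preserving de-duplication: dict keys keep insertion order,
# unlike set(), whose iteration order can vary between runs.
columns = ["price", "volume", "price", "holiday", "volume"]
deduped = list(dict.fromkeys(columns))
print(deduped)  # ['price', 'volume', 'holiday']
```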
Contributors
- jdb78
- JustinNeumann
- reumar
- rustyconover