Releases: sktime/pytorch-forecasting
v1.5.0
What's Changed
Release focusing on:
- python 3.9 end-of-life
- changes to the testing framework
- new estimators in pytorch-forecasting v1 and beta v2
New Contributors
- @Vishnu-Rangiah made their first contribution in #1905
- @Pinaka07 made their first contribution in #1914
- @hubkrieb made their first contribution in #1910
- @Himanshu-Verma-ds made their first contribution in #1488
- @lohraspco made their first contribution in #1926
- @zju-ys made their first contribution in #1924
- @caph1993 made their first contribution in #1944
- @szepeviktor made their first contribution in #1948
- @sanskarmodi8 made their first contribution in #1978
All Contributors
@agobbifbk,
@caph1993,
@cngmid,
@fkiraly,
@fnhirwa,
@Himanshu-Verma-ds,
@hubkrieb,
@jdb78,
@lohraspco,
@phoeenniixx,
@Pinaka07,
@PranavBhatP,
@sanskarmodi8,
@Sohaib-Ahmed21,
@szepeviktor,
@Vishnu-Rangiah,
@zju-ys
Full Changelog: v1.4.0...v1.5.0
v1.4.0
What's Changed
Feature and maintenance update.
New Contributors
- @gbilleyPeco made their first contribution in #1750
- @pietsjoh made their first contribution in #1399
- @MartinoMensio made their first contribution in #1579
- @phoeenniixx made their first contribution in #1811
- @cngmid made their first contribution in #1827
- @Marcrb2 made their first contribution in #1518
- @jobs-git made their first contribution in #1864
All Contributors
@agobbifbk,
@Borda,
@cngmid,
@fkiraly,
@fnhirwa,
@gbilleyPeco,
@jobs-git,
@Marcrb2,
@MartinoMensio,
@phoeenniixx,
@pietsjoh,
@PranavBhatP
Full Changelog: v1.3.0...v1.4.0
v1.3.0
What's Changed
Feature and maintenance update.
- python 3.13 support
- tide model
- bugfixes for TFT
New Contributors
- @xiaokongkong made their first contribution in #1719
- @madprogramer made their first contribution in #1720
- @julian-fong made their first contribution in #1705
- @Sohaib-Ahmed21 made their first contribution in #1734
- @d-schmitt made their first contribution in #1580
- @Luke-Chesley made their first contribution in #1516
- @PranavBhatP made their first contribution in #1762
All Contributors
@d-schmitt,
@fkiraly,
@fnhirwa,
@julian-fong,
@Luke-Chesley,
@madprogramer,
@PranavBhatP,
@Sohaib-Ahmed21,
@xiaokongkong,
@XinyuWuu
Full Changelog: v1.2.0...v1.3.0
v1.2.0
What's Changed
Maintenance update, minor feature additions and bugfixes.
- support for numpy 2.X
- end of life for python 3.8
- fixed documentation build
- bugfixes
New Contributors
- @ewth made their first contribution in #1696
- @airookie17 made their first contribution in #1692
- @benHeid made their first contribution in #1704
- @eugenio-mercuriali made their first contribution in #1699
All Contributors
@airookie17,
@benHeid,
@eugenio-mercuriali,
@ewth,
@fkiraly,
@fnhirwa,
@XinyuWuu,
@yarnabrina
Full Changelog: v1.1.1...v1.2.0
v1.1.1
v1.1.0
What's Changed
Maintenance update widening compatibility ranges and consolidating dependencies:
- support for python 3.11 and 3.12, added CI testing
- support for MacOS, added CI testing
- core dependencies have been minimized to numpy, torch, lightning, scipy, pandas, and scikit-learn.
- soft dependencies are available in soft dependency sets: all_extras for all soft dependencies, and tuning for optuna based optimization; a quick environment check is sketched below.
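The soft dependency sets can be pulled in via the pip extras of the same name, e.g. pip install "pytorch-forecasting[all_extras]" or "pytorch-forecasting[tuning]". As a minimal, illustrative sketch (standard library only, not part of this release), the following checks which of the optional packages named below are present in the current environment:

```python
# Illustrative check of optional dependencies; importlib.util.find_spec is
# standard library and does not import the packages themselves.
from importlib import util

for pkg in ("optuna", "statsmodels", "matplotlib"):
    found = util.find_spec(pkg) is not None
    status = "installed" if found else "not installed (available via a soft dependency set)"
    print(f"{pkg}: {status}")
```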
Dependency changes
- the following are no longer core dependencies and have been changed to optional dependencies: optuna, statsmodels, pytorch-optimize, matplotlib. Environments relying on functionality requiring these dependencies need to be updated to install these explicitly.
- optuna bounds have been updated to optuna >=3.1.0,<4.0.0
- optuna-integrate is now an additional soft dependency, in case of optuna >=3.3.0
Deprecations and removals
- from 1.2.0, the default optimizer will be changed from "ranger" to "adam" to avoid non-torch dependencies in defaults. pytorch-optimize optimizers can still be used. Users should set the optimizer explicitly to continue using "ranger", as in the sketch below.
- from 1.1.0, the loggers do not log figures if the soft dependency matplotlib is not present, but will raise no exceptions in this case. To log figures, ensure that matplotlib is installed.
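A minimal sketch of pinning the optimizer explicitly; the toy data, column names, and hyperparameters below are illustrative only, and keeping "ranger" requires the optional optimizer dependency to be installed:

```python
import pandas as pd

from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.metrics import QuantileLoss

# toy single-series panel, purely illustrative
data = pd.DataFrame(
    {
        "time_idx": list(range(30)),
        "value": [float(i % 7) for i in range(30)],
        "group": ["a"] * 30,
    }
)

training = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="value",
    group_ids=["group"],
    max_encoder_length=10,
    max_prediction_length=5,
    time_varying_unknown_reals=["value"],
)

# set the optimizer explicitly so behaviour does not change when the default
# switches from "ranger" to "adam"; "ranger" needs the optional optimizer
# dependency to be installed
tft = TemporalFusionTransformer.from_dataset(
    training,
    hidden_size=8,
    loss=QuantileLoss(),
    optimizer="ranger",
)
```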
All Contributors
@andre-marcos-perez,
@avirsaha,
@bendavidsteel,
@benHeid,
@bohdan-safoniuk,
@Borda,
@CahidArda,
@fkiraly,
@fnhirwa,
@germanKoch,
@jacktang,
@jdb78,
@jurgispods,
@maartensukel,
@MBelniak,
@orangehe,
@pavelzw,
@sfalkena,
@tmct,
@XinyuWuu,
@yarnabrina
New Contributors
- @jurgispods made their first contribution in #1366
- @jacktang made their first contribution in #1353
- @andre-marcos-perez made their first contribution in #1346
- @tmct made their first contribution in #1340
- @bohdan-safoniuk made their first contribution in #1318
- @MBelniak made their first contribution in #1230
- @CahidArda made their first contribution in #1175
- @bendavidsteel made their first contribution in #1359
- @Borda made their first contribution in #1498
- @fkiraly made their first contribution in #1598
- @XinyuWuu made their first contribution in #1599
- @pavelzw made their first contribution in #1407
- @yarnabrina made their first contribution in #1630
- @fnhirwa made their first contribution in #1646
- @avirsaha made their first contribution in #1649
Full Changelog: v1.0.0...v1.1.0
Update to pytorch 2.0
Breaking Changes
- Upgraded to pytorch 2.0 and lightning 2.0. This brings a couple of changes, such as the configuration of trainers; see the lightning upgrade guide. For PyTorch Forecasting, this particularly means that if you are developing your own models, the class method epoch_end has been renamed to on_epoch_end, model.summarize() is replaced by ModelSummary(model, max_depth=-1), and Tuner(trainer) is its own class, so trainer.tuner needs replacing (#1280). See the migration sketch below.
- Changed the predict() interface to return a named tuple - see tutorials.
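A rough migration sketch under lightning 2.0 for the summary and tuner replacements; the TinyModel module is a hypothetical stand-in for a custom model and is not part of this release:

```python
import torch
from lightning.pytorch import LightningModule, Trainer
from lightning.pytorch.tuner import Tuner
from lightning.pytorch.utilities.model_summary import ModelSummary


class TinyModel(LightningModule):
    """Hypothetical stand-in for a custom forecasting model."""

    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.net = torch.nn.Linear(4, 1)

    def forward(self, x):
        return self.net(x)

    # when subclassing PyTorch Forecasting models, override on_epoch_end
    # instead of the removed epoch_end hook

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


model = TinyModel()

# before: model.summarize()
print(ModelSummary(model, max_depth=-1))

# before: trainer.tuner; the tuner is now a separate class wrapping the trainer
trainer = Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
tuner = Tuner(trainer)
```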
Changes
- The predict method now uses the lightning predict functionality and allows writing results to disk (#1280); see the example below.
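A hedged sketch of the reworked predict call; the toy dataset, column names, and hyperparameters are illustrative only, and the model is untrained, so the outputs are meaningless:

```python
import pandas as pd

from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet

# toy single-series setup, illustrative only
data = pd.DataFrame(
    {
        "time_idx": list(range(40)),
        "value": [float(i % 5) for i in range(40)],
        "group": ["a"] * 40,
    }
)
dataset = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="value",
    group_ids=["group"],
    max_encoder_length=12,
    max_prediction_length=4,
    time_varying_unknown_reals=["value"],
)
dataloader = dataset.to_dataloader(train=False, batch_size=8)

model = TemporalFusionTransformer.from_dataset(dataset, hidden_size=8)

# predict() now runs through lightning's predict loop and, with the return
# flags set, gives back a named tuple instead of a bare tensor
predictions = model.predict(dataloader, return_x=True, return_index=True)
print(predictions.output.shape)  # point forecasts
print(predictions.index.head())  # which series/time step each row refers to
```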
Fixed
- Fixed robust scaler when quantiles are 0.0 and 1.0, i.e. minimum and maximum (#1142)
Poetry update
Multivariate networks
Added
- DeepVar network (#923)
- Enable quantile loss for N-HiTS (#926)
- MQF2 loss (multivariate quantile loss) (#949)
- Non-causal attention for TFT (#949)
- Tweedie loss (#949)
- ImplicitQuantileNetworkDistributionLoss (#995)
Fixed
- Fix learning scale schedule (#912)
- Fix TFT list/tuple issue at interpretation (#924)
- Allowed encoder length down to zero for EncoderNormalizer if transformation is not needed (#949)
- Fix Aggregation and CompositeMetric resets (#949)
Changed
- Dropping Python 3.6 support, adding 3.10 support (#479)
- Refactored dataloader sampling - moved samplers to pytorch_forecasting.data.samplers module (#479)
- Changed transformation format for Encoders to dict from tuple (#949)
Contributors
- jdb78