Releases · FluxML/FluxTraining.jl
v0.3.0
v0.2.4
FluxTraining v0.2.4
Closed issues:
- Allow restricting phases during which a `Metric` runs (#84)
Merged pull requests:
- CompatHelper: bump compat for EarlyStopping to 0.2, (keep existing compat) (#92) (@github-actions[bot])
- CompatHelper: bump compat for EarlyStopping to 0.3, (keep existing compat) (#94) (@github-actions[bot])
- doc: fix link to TensorBoardLogger.jl (#95) (@visr)
- Move documentation system to Pollen.jl (#96) (@lorenzoh)
- Use ReTest.jl to run tests (#97) (@lorenzoh)
- Add and improve a lot of docstrings and a few test cases (#99) (@lorenzoh)
- Add `phase` argument to `Metric` (#100) (@lorenzoh)
- CompatHelper: add new compat entry for InlineTest at version 0.2, (keep existing compat) (#101) (@github-actions[bot])
- (Documentation) Document `Learner` components (#102) (@lorenzoh)
- Add Flux 0.13 compatibility (#103) (@lorenzoh)
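The change tracked by #84 and #100 lets a metric run only during certain phases. A minimal sketch of how that could look, assuming the `Metric` constructor accepts `phase` and `name` keywords and that `accuracy` is a user-defined metric function (exact keyword names may differ between versions):

```julia
using FluxTraining, Flux, Statistics

# Hypothetical metric function over one-hot encoded targets.
accuracy(ŷs, ys) = mean(Flux.onecold(ŷs) .== Flux.onecold(ys))

# Restrict the metric so it is only computed during validation phases.
valacc = Metric(accuracy; phase = ValidationPhase, name = "accuracy")

# `valacc` is then passed to the `Learner` like any other callback.
```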
v0.2.3
v0.2.2
FluxTraining v0.2.2
Merged pull requests:
v0.2.1
FluxTraining v0.2.1
Merged pull requests:
v0.2.0
FluxTraining v0.2.0
Added
- New training loop API that is easier to extend. Defining a `Phase` and `step!` is all you need. See the new tutorial and the new reference, and the sketch below.
- Added `CHANGELOG.md` (this file)
- `AbstractValidationPhase` as supertype for validation phases
- Documentation for callback helpers on reference page
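As a rough illustration of the new extension point, a custom phase with its own `step!` might look like the following. This is a sketch, not the library's verbatim API: the `AbstractValidationPhase` supertype is taken from the entry above, writing results into `learner.step` relies on the `PropDict` change described under "Changed", and the exact module paths (and whether `step!` should go through an event-dispatching helper so callbacks fire) may differ between versions.

```julia
using FluxTraining

# A custom phase, e.g. for evaluating on a held-out test set.
struct TestPhase <: FluxTraining.Phases.AbstractValidationPhase end

# Hypothetical `step!`: forward pass and loss only, no gradient step.
function FluxTraining.step!(learner, phase::TestPhase, batch)
    xs, ys = batch
    learner.step.ŷs = learner.model(xs)
    learner.step.loss = learner.lossfn(learner.step.ŷs, ys)
    return learner.step
end

# One epoch of the custom phase over a data iterator:
# epoch!(learner, TestPhase(), testdata)
```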
Changed
- `Batch*` renamed to `Step*`:
  - events: `BatchBegin` now `StepBegin`, `BatchEnd` now `StepEnd`
  - `CancelBatchException` now `CancelStepException`
  - field `Learner.batch` now `Learner.step`
- `Learner.step/batch` is no longer a special `struct` but now a `PropDict`, allowing you to set arbitrary fields.
- `Learner.model` can now be a `NamedTuple`/`Tuple` of models for use in custom training loops. Likewise, `learner.params` now resembles the structure of `learner.model`, allowing separate access to parameters of different models.
- Callbacks
  - Added `init!` method for callback initialization, replacing the `Init` event which required a `Phase` to implement (see the sketch below).
  - `Scheduler` now has an internal step counter and no longer relies on `Recorder`'s history. This makes it easier to replace the scheduler without needing to offset the new schedules.
  - `EarlyStopping` callback now uses criteria from EarlyStopping.jl
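To illustrate the callback changes listed above (the `init!` hook and the renamed `Step*` events), here is a rough sketch of a custom callback. The module paths (`Events`, `Phases`), the `stateaccess` permission declaration, and the exact `init!`/`on` signatures are assumptions drawn from the callback API of this era and may not match every version.

```julia
using FluxTraining
import FluxTraining: Events, Phases

# A callback that records the training loss after every step.
struct LossRecorder <: FluxTraining.Callback
    losses::Vector{Float64}
end
LossRecorder() = LossRecorder(Float64[])

# One-time setup, replacing the old `Init` event that required a `Phase`.
FluxTraining.init!(cb::LossRecorder, learner) = empty!(cb.losses)

# Declare read access to `learner.step` for the permission system.
FluxTraining.stateaccess(::LossRecorder) = (step = FluxTraining.Read(),)

# React to `StepEnd` (formerly `BatchEnd`) during training phases.
function FluxTraining.on(::Events.StepEnd, ::Phases.AbstractTrainingPhase,
                         cb::LossRecorder, learner)
    push!(cb.losses, learner.step.loss)
end
```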
Removed
- Removed old training API. Methods `fitbatch!`, `fitbatchphase!`, `fitepoch!`, `fitepochphase!` have all been removed.
Closed issues:
- Scheduler applies schedules per batch by default (#68)
- `Recorder` does not work with models with non-`Array` inputs (#80)
Merged pull requests:
- CompatHelper: bump compat for "BSON" to "0.3" (#69) (@github-actions[bot])
- use EarlyStopping.jl for stopping criteria (#72) (@lorenzoh)
- CompatHelper: bump compat for "PrettyTables" to "0.12" (#73) (@github-actions[bot])
- Move documentation to Pollen.jl (#77) (@lorenzoh)
- Revert `onecycle` (#78) (@lorenzoh)
- Remove `samples` field from `History` (#81) (@lorenzoh)
- New training API and QoL improvements (v0.2.0) (#83) (@lorenzoh)
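Since #72 (and the `EarlyStopping` entry in the v0.2.0 changelog above), stopping behaviour is expressed with criteria from EarlyStopping.jl. A hedged sketch of what that could look like, assuming the callback accepts such a criterion directly; the exact constructor arguments may differ:

```julia
using FluxTraining
import EarlyStopping as ES

# Stop training once the monitored quantity has not improved for 3 epochs;
# `Patience` is a stopping criterion from EarlyStopping.jl.
stopper = FluxTraining.EarlyStopping(ES.Patience(3))

# `stopper` is then passed to the `Learner` as a callback.
```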
v0.1.3
v0.1.2
FluxTraining v0.1.2
Merged pull requests:
- small doc fix (#62) (@CarloLucibello)
- CompatHelper: bump compat for "PrettyTables" to "0.11" (#63) (@github-actions[bot])
- Callback utilities (#64) (@lorenzoh)
v0.1.1
FluxTraining v0.1.1
Closed issues:
- Link to Loss broken (#40)
- Early stopping link broken (#41)
- BatchEnd link broken (#42)
- Break out Schedule (#44)
- How to verify GPU is working? (#47)
- What is the role of the test data? (#48)
- What do you think of Data Modules? (#53)
Merged pull requests:
- Add testing as CI step (#36) (@lorenzoh)
- Better docstrings (#37) (@lorenzoh)
- Typo (#43) (@drozzy)
- WRONG order of arguments to the Learner. (#45) (@drozzy)
- Fix issue #42 - missing docstrings (#49) (@lorenzoh)
- add EarlyStopping docstring (#50) (@lorenzoh)
- Update callback reference section on metrics (#51) (@lorenzoh)
- Remove no longer needed dependencies from Project.toml (#52) (@lorenzoh)
- Add SanityCheck callback (#56) (@lorenzoh)
- Fix GarbageCollect callback (#57) (@lorenzoh)
- Fix TensorBoard image serialization (#58) (@lorenzoh)
- Sanitycheck (#60) (@lorenzoh)
- update compat bounds (#61) (@lorenzoh)