Releases: mirkobunse/qunfold
v0.1.4
v0.1.3
v0.1.2
New methods: KMM with EnergyKernelTransformer, GaussianKernelTransformer, LaplacianKernelTransformer, and GaussianRFFKernelTransformer, as proposed by Dussap et al. (2023).
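The idea behind these kernel transformers can be illustrated with a small sketch: a sample is represented by its average kernel values against a set of reference points, and this mean embedding is what KMM matches. The function names and the choice of anchor points below are hypothetical for illustration, not qunfold's actual API.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def kernel_mean_embedding(X_sample, X_anchors, sigma=1.0):
    """Represent a sample by its average kernel value against each anchor."""
    return gaussian_kernel(X_sample, X_anchors, sigma).mean(axis=0)

rng = np.random.default_rng(0)
anchors = rng.normal(size=(5, 2))    # hypothetical reference points
sample = rng.normal(size=(100, 2))   # the sample to be quantified
q = kernel_mean_embedding(sample, anchors)
print(q.shape)  # (5,)
```

The other kernels (energy, Laplacian, random Fourier features) follow the same recipe with a different kernel function.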
Improved performance: by default, transformers now return a sample average instead of item-wise representations that are averaged later. This default boosts performance through improved vectorization of many computations. Moreover, the HellingerSurrogateLoss is implemented more efficiently.
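The two defaults are equivalent in value but not in cost: averaging inside the transformer avoids materializing one representation per item. A minimal sketch with a one-hot class transformer (hypothetical names, not qunfold's internals):

```python
import numpy as np

def itemwise_then_average(predictions, n_classes):
    """Old default: build an (n, n_classes) item-wise matrix, then average."""
    onehot = np.eye(n_classes)[predictions]  # materializes n rows
    return onehot.mean(axis=0)

def average_directly(predictions, n_classes):
    """New default: return the sample average without the item-wise matrix."""
    return np.bincount(predictions, minlength=n_classes) / len(predictions)

preds = np.array([0, 2, 1, 2, 2, 0])
print(np.allclose(itemwise_then_average(preds, 3),
                  average_directly(preds, 3)))  # True
```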
More robust API: during fitting, the expected number of classes can be specified to handle the rare case of classes that are missing from a sample in experiments.
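Why such a hint matters: if the number of classes is inferred from the labels alone, a class that happens to be absent from a sample silently shrinks the prevalence vector. A sketch with a hypothetical helper (not qunfold's actual signature):

```python
import numpy as np

def class_prevalences(y, n_classes=None):
    """Estimate class prevalences; n_classes pads classes absent from y."""
    if n_classes is None:
        n_classes = int(y.max()) + 1  # inferred; misses trailing empty classes
    return np.bincount(y, minlength=n_classes) / len(y)

y = np.array([0, 0, 1, 1, 1])             # class 2 happens to be missing
print(class_prevalences(y))               # [0.4 0.6] -> silently 2 classes
print(class_prevalences(y, n_classes=3))  # [0.4 0.6 0. ] -> correct length
```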
v0.1.0
This first version of qunfold implements many quantification and unfolding methods, provides the desired composability, is comprehensively documented, and is evaluated through experiments.
The implemented methods are:
- ACC
- PACC
- EDx
- EDy
- HDx
- HDy
- RUN
From their components, many new methods can be composed, e.g., ordinal variants of the existing methods.
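The composability rests on a recipe that all of these methods share: a transformer turns data into a vector q and a matrix M of class-wise expectations, and a loss compares q with Mp to recover the prevalence vector p. A minimal numpy sketch of that shared step, with hypothetical ACC-style ingredients rather than qunfold's actual classes:

```python
import numpy as np

def solve_least_squares(M, q):
    """Generic step shared by composed methods: find p with q ~ M p,
    then map p onto the probability simplex by clipping and renormalizing."""
    p, *_ = np.linalg.lstsq(M, q, rcond=None)
    p = np.clip(p, 0, None)
    return p / p.sum()

# Hypothetical ingredients: M holds the class-conditional behavior of a
# classifier (one column per class), q is its average output on the sample.
M = np.array([[0.8, 0.1],
              [0.2, 0.9]])
q = M @ np.array([0.3, 0.7])  # sample generated with prevalences 30% / 70%
print(solve_least_squares(M, q).round(2))  # [0.3 0.7]
```

Swapping in a different transformer (kernel embeddings, histograms) or a different loss (Hellinger surrogate, regularized losses as in RUN) yields the other methods and their new combinations.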