Releases: baal-org/baal
v1.4.0
What's Changed
- Support arrowdataset by @parmidaatg in #142
- Give users the ability to get uncertainty values, by @Dref360 in #144
- #146 Fix issue where at most a single submodule was affected by Dropout by @Dref360 in #147
- #131 Use poetry instead of setup.py by @Dref360 in #148
- #145 Example using MLP on MNIST by @Dref360 in #150
- mlp regression experiment by @parmidaatg in #152
- #130 Add mypy and step to test imports by @Dref360 in #155
Full Changelog: v1.3.1...1.4.0
v1.3.1
v1.3.0
BaaL 1.3.0 is a release focused on UX.
Features
- Initial support for HF Trainer along with tutorials to use HuggingFace.
- Initial support for semi-supervised learning; we are eager to see what the community will do with such a powerful tool!
- Fixes in ECE computation
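The ECE mentioned above can be sketched as follows. This is a generic illustration of the metric (bin predictions by confidence, then take the weighted average of the accuracy/confidence gap per bin), not Baal's internal implementation:

```python
# Generic sketch of Expected Calibration Error (ECE); illustrative only,
# not Baal's code.

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average of |accuracy - confidence| over confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # A confidence of exactly 1.0 falls in the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(o for _, o in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(accuracy - avg_conf)
    return ece
```

A perfectly calibrated model (accuracy matches confidence in every bin) scores 0; an overconfident model scores higher.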
Documentation
The biggest change in this release is the new website along with tons of content.
BaaL v1.2.1
Changelog
Features
- Initial support for ensembles. Example to come.
- Initial support for Pytorch Lightning. Example here.
Bugfixes
- Fix BALD for binary classification
- Fix Random heuristic for generators
- Fix `to_cuda` for strings.
- Fix a bug where MCDropconnect would not work with DataParallel
Misc
- Warning when no layers are affected by `patch_layers` in MCDropout, MCDropconnect.
v1.2.0
Changelist for v1.2.0
- Add DirichletCalibration (Kull et al. 2019), see our blog post.
- Add ECE metrics for computing a model's calibration.
- Add support for Multi-Input/Output for ModelWrapper
- Fix BatchBALD to be consistent with the official implementation
- Add ConsistentDropout, where the masks used in MC-Dropout are the same for each input.
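The idea behind consistent dropout can be sketched in a few lines. This is a conceptual illustration of the mechanism described above (a mask drawn once and reused across forward passes), not Baal's ConsistentDropout module:

```python
import random

# Conceptual sketch of consistent dropout: the mask is sampled once and
# reused for every call, so repeated predictions on the same input are
# deterministic until the mask is explicitly reset. Illustrative only.

class ConsistentDropoutSketch:
    def __init__(self, p=0.5, size=4, seed=0):
        self.p = p
        self.size = size
        self.rng = random.Random(seed)
        self.mask = None

    def reset_mask(self):
        # Standard inverted-dropout scaling: kept units are scaled by 1/(1-p).
        self.mask = [0.0 if self.rng.random() < self.p else 1.0 / (1 - self.p)
                     for _ in range(self.size)]

    def __call__(self, x):
        if self.mask is None:
            self.reset_mask()
        return [xi * mi for xi, mi in zip(x, self.mask)]
```

Because the mask is cached, two consecutive calls on the same input return the same output, which is what makes per-input uncertainty estimates reproducible.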
Important notes
- BaaL is now part of the PyTorch Ecosystem!
v1.1.0
BaaL v1.1 release notes
Changelog
- Support for MC-Dropconnect (Mobiny, 2019)
- `ActiveLearningDataset` now has better support for attributes specific to the pool (see below).
- More flexible support for multi-inputs/outputs in `ModelWrapper`.
  - Can support lists of inputs or outputs.
- QoL features on `ActiveLearningDataset`
  - Can use a RandomState and adds `load_state_dict`.
- Add `replicate_in_memory` flag to `ModelWrapper`.
  - If False, the MC iterations are done in a for-loop instead of making a batch in memory.
  - (This means `predict_on_batch` would not take up more memory than e.g. `test_on_batch`.)
- Add `patience` and `min_epoch_for_es` to `ModelWrapper.train_and_test_on_datasets`.
  - Allows early stopping.
- New tutorial on how to use BaaL with scikit-learn.
- Can now combine heuristics for multi-outputs models (see baal.active.heuristics.Combine).
- Fix documentation
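The trade-off behind the `replicate_in_memory` flag can be sketched as follows. This is a conceptual illustration (a toy stochastic function stands in for a network with dropout), not `ModelWrapper`'s actual code:

```python
import random

# Sketch of the two MC-iteration strategies: replicate the batch in memory
# and run one large forward pass, or loop and run one pass per iteration.
# The loop trades peak memory for a little extra time. Illustrative only.

def stochastic_model(batch, rng):
    # Stand-in for a network with dropout: output depends on random state.
    return [x * rng.random() for x in batch]

def predict_replicated(batch, iterations, seed=0):
    rng = random.Random(seed)
    big_batch = batch * iterations              # all copies held in memory at once
    out = stochastic_model(big_batch, rng)
    n = len(batch)
    return [out[i * n:(i + 1) * n] for i in range(iterations)]

def predict_loop(batch, iterations, seed=0):
    rng = random.Random(seed)
    # Peak memory stays at one batch, regardless of `iterations`.
    return [stochastic_model(batch, rng) for _ in range(iterations)]
```

Both strategies produce the same set of MC predictions; only the memory profile differs, which is why the for-loop variant keeps `predict_on_batch` no hungrier than `test_on_batch`.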
New ActiveLearningDataset
To better support new tasks, `ActiveLearningDataset` now allows any attribute to be overridden when the pool is created.
Example:
from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms import Compose, ToTensor, RandomHorizontalFlip
from baal.active.dataset import ActiveLearningDataset

class MyDataset(Dataset):
    def __init__(self):
        self.my_transforms = Compose([RandomHorizontalFlip(), ToTensor()])

    def __len__(self):
        return 10

    def __getitem__(self, idx):
        x = Image.open('an_image.png')
        return self.my_transforms(x)

al_dataset = ActiveLearningDataset(MyDataset(),
                                   pool_specifics={
                                       'my_transforms': ToTensor()
                                   })

# Now `pool.my_transforms == ToTensor()`
pool = al_dataset.pool
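The override mechanism itself can be sketched in plain Python. This illustrates the general idea of a pool view that swaps listed attributes (for example, replacing training augmentations with evaluation transforms); it is not Baal's internal implementation:

```python
import copy

# Conceptual sketch of a pool_specifics-style override: the pool view
# shallow-copies the wrapped dataset and replaces the named attributes.
# Illustrative only, not Baal's code.

class PoolView:
    def __init__(self, dataset, pool_specifics):
        self._dataset = copy.copy(dataset)       # original dataset is untouched
        for name, value in pool_specifics.items():
            if not hasattr(self._dataset, name):
                raise AttributeError(f"{name!r} not found on the dataset")
            setattr(self._dataset, name, value)  # e.g. swap train-time transforms

    def __getattr__(self, name):
        # Delegate everything else to the copied dataset.
        return getattr(self._dataset, name)
```

The shallow copy is the key design choice: the pool sees evaluation-time attributes while the labelled training set keeps its augmentations.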