Version 0.6.0
[0.6.0] - 2019-07-19
This release introduces convenience features such as SliceDataset, which makes it easier to use torch datasets (e.g. from torchvision) in combination with sklearn features such as GridSearchCV. There was also some work to make the transition from CUDA-trained models to CPU smoother, and learning rate schedulers were upgraded to use torch built-in functionality.
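For illustration, here is a minimal sketch of how the new SliceDataset wrapper can be combined with GridSearchCV. The toy dataset, module, and hyperparameter values below are made up for this example and are not part of the release notes:

```python
import numpy as np
import torch
from torch import nn
from sklearn.model_selection import GridSearchCV
from skorch import NeuralNetClassifier
from skorch.helper import SliceDataset


class ToyDataset(torch.utils.data.Dataset):
    """Stand-in for e.g. a torchvision dataset returning (X, y) pairs."""
    def __init__(self, n=100):
        self.X = torch.randn(n, 20)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        return self.X[i], self.y[i]


class ToyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(20, 2)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, X):
        return self.softmax(self.dense(X))


ds = ToyDataset()
# SliceDataset exposes the dataset as a sliceable, array-like object,
# which is what sklearn's cross-validation utilities expect.
X_sl = SliceDataset(ds, idx=0)                    # the inputs
y = np.array([int(target) for _, target in ds])   # targets as a plain array for CV splits

net = NeuralNetClassifier(ToyModule, max_epochs=5, lr=0.1, verbose=0)
gs = GridSearchCV(net, {'lr': [0.05, 0.1]}, cv=3, scoring='accuracy')
gs.fit(X_sl, y)
print(gs.best_params_)
```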
Here's the full list of changes:
Added
- Adds FAQ entry regarding the initialization behavior of NeuralNet when passed instantiated models. (#409)
- Added CUDA pickle test including an artifact that supports testing on CUDA-less CI machines
- Adds train_batch_count and valid_batch_count to history in training loop. (#445)
- Adds score method for NeuralNetClassifier, NeuralNetBinaryClassifier, and NeuralNetRegressor (#469); see the sketch after this list
- Wrapper class (SliceDataset) for torch Datasets to make them work with some sklearn features (e.g. grid search). (#443)
Changed
- Repository moved to https://github.com/skorch-dev/skorch/, please change your git remotes
- Treat cuda-dependent attributes as prefixes to cover values set using set_params, since previously "criterion_" would not match net.criterion__weight as set by net.set_params(criterion__weight=w); see the sketch after this list
- skorch pickle format changed in order to improve CUDA compatibility; if you have pickled models, please re-pickle them to be able to load them in the future
- net.criterion_ and its parameters are now moved to the target device when using criteria that inherit from torch.nn.Module. Previously the user had to make sure that parameters such as class weights were on the compute device
- skorch now assumes PyTorch >= 1.1.0. This mainly affects learning rate schedulers, whose inner workings changed with version 1.1.0. This update will also invalidate pickled skorch models after a change introduced in PyTorch optimizers.
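A minimal sketch of the set_params behavior described above, assuming a classification net with class weights on the criterion; the data, module, and weight values are illustrative only:

```python
import numpy as np
import torch
from torch import nn
from skorch import NeuralNetClassifier

X = np.random.randn(100, 20).astype('float32')
y = np.random.randint(0, 2, 100).astype('int64')

# Illustrative class weights for an imbalanced problem.
w = torch.tensor([0.3, 0.7])

net = NeuralNetClassifier(
    module=nn.Sequential(nn.Linear(20, 2), nn.Softmax(dim=-1)),
    criterion=nn.NLLLoss,   # the default criterion, shown here for clarity
    device='cuda' if torch.cuda.is_available() else 'cpu',
    max_epochs=3,
)
# Criterion parameters are set with the criterion__ prefix. As of this release
# such attributes are covered by the cuda-dependent prefix handling, and the
# criterion (including its weight tensor) is moved to the net's device.
net.set_params(criterion__weight=w)
net.fit(X, y)
```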
Fixed
- Include requirements in MANIFEST.in
- Add criterion_ to NeuralNet.cuda_dependent_attributes_ to avoid issues with criterion weight tensors from, e.g., NLLLoss (#426)
- TrainEndCheckpoint can be cloned by sklearn.base.clone. (#459)
Thanks to all the contributors:
- Bram Vanroy
- Damien Lancry
- Ethan Rosenthal
- Sergey Alexandrov
- Thomas Fan
- Zayd Hammoudeh