v0.3.0
Features
- significantly reduced the overhead of skorch over PyTorch for small and medium workloads
- predefined splits are easier to use (`skorch.helper.predefined_split`)
- freezing layers is now easier with `skorch.helper.filtered_optimizer`
- introduce `NeuralNetBinaryClassifier` (sketched below)
- introduce an early stopping callback (`EarlyStopping`)
- support parallel grid search using Dask (see the grid search sketch below)
- support for LBFGS (see the LBFGS sketch below)
- history can be saved and loaded independently (see the history/scheduler sketch below)
- learning rate schedulers now have a method to simulate their behavior
- `Checkpoint` callback supports pickling and history saving
- `Checkpoint` callback is less noisy
- added transfer learning tutorial
- added a tutorial on how to expose skorch via a REST API
- improved documentation
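
The snippets below are not part of the release itself; they are minimal sketches of the new APIs, with made-up modules, data, and hyperparameters. The first one combines `NeuralNetBinaryClassifier`, the `EarlyStopping` callback, and `predefined_split`:

```python
import numpy as np
from torch import nn

from skorch import NeuralNetBinaryClassifier
from skorch.callbacks import EarlyStopping
from skorch.dataset import Dataset
from skorch.helper import predefined_split


class ClassifierModule(nn.Module):
    """Placeholder module used throughout the sketches."""
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(20, 1)

    def forward(self, X):
        # NeuralNetBinaryClassifier expects one logit per sample
        return self.dense(X).squeeze(-1)


X = np.random.randn(200, 20).astype('float32')
y = (X[:, 0] > 0).astype('float32')

# hold out a fixed validation set instead of the default random split
valid_ds = Dataset(X[:50], y[:50])

net = NeuralNetBinaryClassifier(
    ClassifierModule,
    max_epochs=20,
    train_split=predefined_split(valid_ds),
    # stop training once the validation loss stops improving
    callbacks=[EarlyStopping(patience=5)],
)
net.fit(X[50:], y[50:])
```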
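A sketch of a Dask-parallelized grid search; it assumes a recent joblib plus the `distributed` package and reuses the placeholder `ClassifierModule`, `X`, and `y` from the `NeuralNetBinaryClassifier` sketch. The parameter grid is arbitrary.

```python
from dask.distributed import Client
from joblib import parallel_backend
from sklearn.model_selection import GridSearchCV

from skorch import NeuralNetBinaryClassifier

client = Client()  # local Dask cluster; pass a scheduler address for a remote one

net = NeuralNetBinaryClassifier(ClassifierModule, verbose=0)
param_grid = {'lr': [0.01, 0.1], 'max_epochs': [10, 20]}
gs = GridSearchCV(net, param_grid, cv=3, scoring='accuracy', n_jobs=-1)

# joblib's Dask backend dispatches the individual cross-validation fits to the cluster
with parallel_backend('dask'):
    gs.fit(X, y)

print(gs.best_params_)
```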
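Using LBFGS is now just a matter of passing the optimizer class (again reusing the placeholder module and data). LBFGS needs a closure that re-evaluates the loss, which is what the `train_step` split listed under "API changes" enables.

```python
import torch

from skorch import NeuralNetBinaryClassifier

# skorch builds the loss-re-evaluating closure that LBFGS requires
net_lbfgs = NeuralNetBinaryClassifier(
    ClassifierModule,
    optimizer=torch.optim.LBFGS,
    max_epochs=5,
)
net_lbfgs.fit(X, y)
```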
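A sketch of saving and loading the history on its own and of previewing a learning rate schedule, assuming the `History.to_file`/`History.from_file` and `LRScheduler.simulate` entry points; the file name and schedule are arbitrary, and the module and data are the placeholders from above.

```python
from torch.optim.lr_scheduler import StepLR

from skorch import NeuralNetBinaryClassifier
from skorch.callbacks import LRScheduler
from skorch.history import History

# preview what a scheduler would do, without training anything
scheduler = LRScheduler(policy=StepLR, step_size=10, gamma=0.5)
print(scheduler.simulate(30, 0.1))  # 30 steps, starting from lr=0.1

net = NeuralNetBinaryClassifier(ClassifierModule, max_epochs=5, verbose=0)
net.fit(X, y)

# persist the training history separately from the model ...
net.history.to_file('history.json')

# ... and load it back later without the net
history = History.from_file('history.json')
print(history[-1, 'valid_loss'])
```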
API changes
- `train_step` is now split into `train_step` and `train_step_single` in order to support LBFGS, where `train_step_single` takes the role of your typical training inner loop when writing PyTorch models
- the `device` parameter on `skorch.dataset.Dataset` is now deprecated
- the `Checkpoint` parameter `target` is deprecated in favor of `f_params` (see the sketch after this list)
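
A sketch of the renamed `Checkpoint` parameter, reusing the placeholder module and data from the feature sketches above; the file name is arbitrary.

```python
from skorch import NeuralNetBinaryClassifier
from skorch.callbacks import Checkpoint

# save parameters whenever the validation loss improves;
# f_params replaces the deprecated 'target' argument
cp = Checkpoint(f_params='best_params.pt')

net = NeuralNetBinaryClassifier(ClassifierModule, callbacks=[cp])
net.fit(X, y)
```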
Contributors
A big thanks to our contributors who helped make this release possible:
- Andrew Spott
- Scott Sievert
- Sergey Alexandrov
- Thomas Fan
- Tomasz Pietruszka