Version 0.4.0
Organisational
From now on we will maintain a change log and document every change directly. If
you are a contributor, we encourage you to document your changes directly in the
change log when submitting a PR to reduce friction when preparing new releases.
Added
- Support for PyTorch 0.4.1
- Callbacks no longer need to be named explicitly (names are assigned automatically and name conflicts are resolved); see the sketch below
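A minimal sketch of the automatic naming, using `skorch.toy.make_classifier` (also new in this release) to supply a module; the duplicate `EpochScoring` callbacks are our own illustration, not part of the release notes:

```python
from skorch import NeuralNetClassifier
from skorch.callbacks import EpochScoring
from skorch.toy import make_classifier

# Two callbacks of the same class, passed without explicit names;
# skorch assigns names automatically and resolves the collision.
net = NeuralNetClassifier(
    make_classifier(),
    callbacks=[
        EpochScoring('accuracy', lower_is_better=False),
        EpochScoring('f1', lower_is_better=False),
    ],
)
net.initialize()
```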
- You can now access the training data in the `on_grad_computed` event
- There is a new image segmentation example
- Easily create toy network instances for quick experiments using `skorch.toy.make_classifier` and friends (a usage sketch follows the example below)
- New `ParamMapper` callback to modify/freeze/unfreeze parameters at certain points in time during training:
```python
>>> from skorch.callbacks import Freezer, Unfreezer
>>> net = NeuralNet(module, callbacks=[Freezer('layer*.weight'), Unfreezer('layer*.weight', at=10)])
```
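And the promised `skorch.toy.make_classifier` sketch; the keyword arguments are assumptions chosen for illustration, see the toy module documentation for the exact signature:

```python
import numpy as np
from skorch import NeuralNetClassifier
from skorch.toy import make_classifier

# Build a throwaway MLP classifier class for quick experiments.
MLP = make_classifier(input_units=20, hidden_units=10, num_hidden=2)
net = NeuralNetClassifier(MLP, max_epochs=5)

# Random data, just to show the class is immediately usable.
X = np.random.randn(100, 20).astype('float32')
y = np.random.randint(0, 2, size=100).astype('int64')
net.fit(X, y)
```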
- Refactored `EpochScoring` for easier sub-classing
- `Checkpoint` callback now supports saving the optimizer; this avoids problems with stateful optimizers such as `Adam` or `RMSprop` (#360). A combined sketch of the checkpoint features follows this list
- Added `LoadInitState` callback for easy continued training from checkpoints (#360)
- `NeuralNet.load_params` now supports loading from `Checkpoint` instances
- Added documentation for saving and loading, highlighting the new features
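The three checkpoint-related additions work together. A minimal sketch, assuming a toy module and file names of our own choosing (treat the exact keyword arguments as assumptions):

```python
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint, LoadInitState
from skorch.toy import make_classifier

# Checkpoint can now also persist the optimizer state, which matters
# for stateful optimizers such as Adam or RMSprop.
cp = Checkpoint(
    f_params='params.pt',
    f_optimizer='optimizer.pt',
    f_history='history.json',
)

# LoadInitState restores the checkpointed state when training starts,
# so an interrupted run can be continued where it left off.
net = NeuralNetClassifier(
    make_classifier(),
    callbacks=[cp, LoadInitState(cp)],
)

# Loading from a Checkpoint instance also works explicitly:
# net.initialize().load_params(checkpoint=cp)
```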
Changed
- The `ProgressBar` callback now determines the batches per epoch automatically by default (`batches_per_epoch='auto'`)
- The `on_grad_computed` event now has access to the current training data batch
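For illustration, a hypothetical callback using the new batch access; the exact keyword names handed to the event are an assumption, with `**kwargs` absorbing anything else:

```python
from skorch.callbacks import Callback

class BatchGradLogger(Callback):
    """Hypothetical callback: report the batch size once gradients exist."""

    def on_grad_computed(self, net, named_parameters, X=None, y=None, **kwargs):
        # X and y are the current training batch, newly available here.
        print("gradients computed for a batch of size", len(X))
```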
Deprecated
- Deprecated `filtered_optimizer` in favor of the `Freezer` callback (#346)
- `NeuralNet.load_params` and `NeuralNet.save_params` deprecate the `f` parameter in favor of `f_optimizer`, `f_params` and `f_history` (#360)
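A before/after sketch of the `save_params`/`load_params` change, assuming `net` is a fitted `NeuralNet` and the file names are our own:

```python
# `net` is assumed to be a fitted NeuralNet instance.

# Before (deprecated): one catch-all `f` argument
# net.save_params(f='model.pt')

# After: one argument per artifact
net.save_params(f_params='model.pt', f_optimizer='optimizer.pt',
                f_history='history.json')
net.load_params(f_params='model.pt', f_optimizer='optimizer.pt',
                f_history='history.json')
```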
Removed
- `skorch.net.NeuralNetClassifier` and `skorch.net.NeuralNetRegressor` are removed. Use `from skorch import NeuralNetClassifier` or `skorch.NeuralNetClassifier` instead.
Fixed
- `uses_placeholder_y` should not require existence of `y` field (#311)
- LR scheduler creates `batch_idx` on first run (#314)
- Use `OrderedDict` for callbacks to fix Python 3.5 compatibility issues (#331)
- Make `to_tensor` work correctly with `PackedSequence` (#335)
- Rewrite `History` to not use any recursion to avoid memory leaks during exceptions (#312)
- Use `flaky` in some neural network tests to hide platform differences
- Fix `ReduceLROnPlateau` when `mode == 'max'` (#363)
- Fix disconnected weights between net and optimizer after copying the net with `copy.deepcopy` (#318)
- Fix a bug that interfered with loading CUDA models when the model was a CUDA tensor but the net was configured to use the CPU (#354, #358)
Contributors
Again we'd like to thank all the contributors for their awesome work. Thank you:
- Andrew Spott
- Dave Hirschfeld
- Scott Sievert
- Sergey Alexandrov
- Thomas Fan