Version 0.4.0

@ottonemo ottonemo released this 24 Oct 13:52

Organisational

From now on we will maintain a change log and document every change in it directly. If
you are a contributor, we encourage you to document your changes in the change log
when submitting a PR, to reduce friction when preparing new releases.

Added

  • Support for PyTorch 0.4.1
  • There is no need to explicitly name callbacks anymore (names are assigned automatically, name conflicts are resolved).
  • You can now access the training data in the on_grad_computed event
  • There is a new image segmentation example
  • Easily create toy network instances for quick experiments using skorch.toy.make_classifier and friends
  • New ParamMapper callback to modify/freeze/unfreeze parameters at a certain point during training:
>>> from skorch.callbacks import Freezer, Unfreezer
>>> net = Net(module, callbacks=[Freezer('layer*.weight'), Unfreezer('layer*.weight', at=10)])
  • Refactored EpochScoring for easier sub-classing
  • Checkpoint callback now supports saving the optimizer, which avoids problems with stateful
    optimizers such as Adam or RMSprop (#360)
  • Added LoadInitState callback for easy continued training from checkpoints (#360)
  • NeuralNet.load_params now supports loading from Checkpoint instances
  • Added documentation for saving and loading, highlighting the new features

Changed

  • The ProgressBar callback now determines the batches per epoch automatically by default (batches_per_epoch=auto)
  • The on_grad_computed event now has access to the current training data batch

Deprecated

  • Deprecated filtered_optimizer in favor of Freezer callback (#346)
  • NeuralNet.load_params and NeuralNet.save_params deprecate f parameter for the sake
    of f_optimizer, f_params and f_history (#360)

Removed

  • skorch.net.NeuralNetClassifier and skorch.net.NeuralNetRegressor are removed.
    Use from skorch import NeuralNetClassifier or skorch.NeuralNetClassifier instead.

Fixed

  • uses_placeholder_y should not require existence of y field (#311)
  • LR scheduler creates batch_idx on first run (#314)
  • Use OrderedDict for callbacks to fix python 3.5 compatibility issues (#331)
  • Make to_tensor work correctly with PackedSequence (#335)
  • Rewrite History to not use any recursion to avoid memory leaks during exceptions (#312)
  • Use flaky in some neural network tests to hide platform differences
  • Fix ReduceLROnPlateau when mode == max (#363)
  • Fix disconnected weights between net and optimizer after copying the net with copy.deepcopy (#318)
  • Fix a bug that interfered with loading CUDA models when the model was a CUDA tensor but
    the net was configured to use the CPU (#354, #358)

Contributors

Again, we'd like to thank all the contributors for their awesome work.
Thank you:

  • Andrew Spott
  • Dave Hirschfeld
  • Scott Sievert
  • Sergey Alexandrov
  • Thomas Fan