Better user-facing messages when the module or optimizer is re-initialized
Added an experimental API (net._register_virtual_param) to register "virtual" parameters on the network with custom setter functions. (#369)
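A minimal sketch of how this experimental API might be used. The setter signature (net, param_name, value), the toy TemperatureModule, and the assumption that set_params routes the registered name to the setter are illustrations based on the entry above, not a documented contract.

```python
import torch.nn as nn
import torch.nn.functional as F
from skorch import NeuralNetClassifier

class TemperatureModule(nn.Module):
    """Toy module with a non-learnable 'temperature' attribute (assumed for this sketch)."""
    def __init__(self, temperature=1.0):
        super().__init__()
        self.temperature = temperature
        self.dense = nn.Linear(20, 2)

    def forward(self, X):
        return F.softmax(self.dense(X) / self.temperature, dim=-1)

# Assumed setter signature: (net, param_name, value).
def set_temperature(net, param_name, value):
    net.module_.temperature = value

net = NeuralNetClassifier(TemperatureModule)
net.initialize()
# Register 'temperature' as a virtual parameter handled by the custom setter,
# so that net.set_params(temperature=...) is forwarded to it (assumed behavior).
net._register_virtual_param(['temperature'], set_temperature)
net.set_params(temperature=2.0)
```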
Setting parameters lr, momentum, optimizer__lr, etc. no longer resets the optimizer. For example, you can now call net.set_params(lr=0.03) or net.set_params(optimizer__param_group__0__momentum=0.86) without triggering a re-initialization of the optimizer (#369)
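A short usage sketch of this behavior; nn.Linear is only a stand-in module, and the default SGD optimizer is assumed so that a momentum entry exists in the parameter group.

```python
import torch.nn as nn
from skorch import NeuralNetClassifier

net = NeuralNetClassifier(
    nn.Linear,                 # stand-in module for illustration
    module__in_features=20,
    module__out_features=2,
    lr=0.05,
)
net.initialize()

# Neither call re-initializes the optimizer; the values are updated in place.
net.set_params(lr=0.03)
net.set_params(optimizer__param_group__0__momentum=0.86)
```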
Support for scipy sparse CSR matrices as input (e.g. as returned by sklearn's CountVectorizer); note that they are cast to dense matrices during batching
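A small end-to-end sketch of fitting on a CSR matrix; the toy corpus, the nn.Linear stand-in module, the cast to float32, and train_split=None are assumptions made only to keep the example short and runnable.

```python
import numpy as np
import torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer
from skorch import NeuralNetClassifier

corpus = ["good movie", "bad movie", "great film", "awful film"]
y = np.asarray([1, 0, 1, 0])

# CountVectorizer returns a scipy.sparse CSR matrix; cast to float32 for the module.
X = CountVectorizer().fit_transform(corpus).astype(np.float32)

net = NeuralNetClassifier(
    nn.Linear,                       # stand-in module for illustration
    module__in_features=X.shape[1],
    module__out_features=2,
    max_epochs=3,
    train_split=None,                # toy dataset is too small for a validation split
)
# The CSR matrix is accepted as-is; each batch is cast to a dense array internally.
net.fit(X, y)
```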
Helper functions to build command line interfaces with almost no boilerplate, plus an example that shows how to use them (see the sketch below)
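A sketch of the intended usage pattern: parse_args is assumed to live in skorch.helper, fire is an extra third-party dependency, and the synthetic data plus nn.Linear stand-in module are there only to make the script self-contained.

```python
# train.py -- hypothetical script showing the CLI helpers together with fire
import fire
import numpy as np
import torch.nn as nn
from sklearn.datasets import make_classification
from skorch import NeuralNetClassifier
from skorch.helper import parse_args

def main(**kwargs):
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)
    X = X.astype(np.float32)

    net = NeuralNetClassifier(
        nn.Linear,                 # stand-in module for illustration
        module__in_features=20,
        module__out_features=2,
    )
    parsed = parse_args(kwargs)    # turns CLI arguments into net parameters
    net = parsed(net)
    net.fit(X, y)

if __name__ == '__main__':
    fire.Fire(main)
```

With a script like this, something along the lines of python train.py --lr 0.1 --max_epochs 5 would configure the net from the command line without any argument-parsing boilerplate.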
Changed
Reduced the overhead of BatchScoring when using train_loss_score or valid_loss_score by skipping a superfluous inference step (#381)
The on_grad_computed callback function now receives named_parameters as a lazy iterable that is only materialized if the callback actually uses it, which reduces the run-time overhead of the call (#379)
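A sketch of a custom callback consuming this hook; the hook name and the named_parameters argument follow the entry above, while the gradient-norm printing is just an illustrative choice.

```python
from skorch.callbacks import Callback

class PrintGradNorms(Callback):
    """Illustrative callback: print the gradient norm of each parameter."""
    def on_grad_computed(self, net, named_parameters, **kwargs):
        # named_parameters is an iterable; per the change above, it is only
        # materialized because this callback actually iterates over it.
        for name, param in named_parameters:
            if param.grad is not None:
                print(name, float(param.grad.norm()))
```

Like any other callback, it would be passed to the net via callbacks=[PrintGradNorms()].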
Default fn_prefix in TrainEndCheckpoint is now train_end_ (#391)
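A brief sketch of the effect: with the new default prefix, the files written at the end of training (e.g. train_end_params.pt, assuming the default f_params) no longer collide with those of a regular Checkpoint saving into the same directory.

```python
import torch.nn as nn
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint, TrainEndCheckpoint

net = NeuralNetClassifier(
    nn.Linear,                 # stand-in module for illustration
    module__in_features=20,
    module__out_features=2,
    callbacks=[
        Checkpoint(dirname='exp1'),          # best epoch, e.g. params.pt
        TrainEndCheckpoint(dirname='exp1'),  # final state, e.g. train_end_params.pt
    ],
)
```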
Issues a warning when Checkpoint's monitor parameter is set to some metric monitor while the history already contains <monitor>_best. (#399)
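An illustrative reading of this entry; the metric name valid_loss is an assumption.

```python
from skorch.callbacks import Checkpoint

# Probably intended: save whenever the validation loss reached a new best.
cp_ok = Checkpoint(monitor='valid_loss_best')

# Probably a mistake: since the history also contains 'valid_loss_best',
# monitoring the raw 'valid_loss' value now triggers a warning during training.
cp_suspicious = Checkpoint(monitor='valid_loss')
```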
Fixed
Re-initialize optimizer when set_params is called with lr argument (#372)
Copying a SliceDict now returns a SliceDict instead of a dict (#388)
Calling == on SliceDicts now works as expected when their values are numpy arrays or torch tensors
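A small sketch touching both SliceDict fixes above; SliceDict is assumed to be importable from skorch.helper and == is assumed to return a single boolean, per the entries.

```python
import numpy as np
import torch
from skorch.helper import SliceDict

d = SliceDict(X=np.arange(6).reshape(3, 2), w=torch.zeros(3))

# copy() now preserves the type instead of degrading to a plain dict.
d_copy = d.copy()
assert isinstance(d_copy, SliceDict)

# == now compares the numpy array and torch tensor values as expected.
assert d == d_copy
```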