@@ -48,7 +48,7 @@ on_train_begin(net, X, y)
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Called once at the start of the training process (e.g. when calling
-fit).
+``fit``).
 
 on_train_end(net, X, y)
 ^^^^^^^^^^^^^^^^^^^^^^^
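As a hedged sketch of how these two hooks can be used, here is a toy
callback (``TrainTimer`` is made up for illustration and is not part of
skorch) that measures how long ``fit`` takes:

.. code:: python

    import time

    from skorch.callbacks import Callback

    class TrainTimer(Callback):
        # Hypothetical example callback, not shipped with skorch.
        def on_train_begin(self, net, X=None, y=None, **kwargs):
            # Remember when training started.
            self.start_time_ = time.time()

        def on_train_end(self, net, X=None, y=None, **kwargs):
            # Report the total training duration.
            print("training took {:.1f}s".format(time.time() - self.start_time_))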
@@ -74,7 +74,6 @@ Called once before each batch of data is processed, i.e. possibly
 several times per epoch. Gets batch data as additional input.
 Also includes a bool indicating if this is a training batch or not.
 
-
 on_batch_end(net, batch, training, loss, y_pred)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
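To illustrate the ``training`` bool mentioned above, here is a minimal
sketch (``BatchLossPrinter`` is a made-up name, not a built-in callback;
the exact hook signature may vary between skorch versions, so the extra
arguments are taken from ``**kwargs``-friendly keywords):

.. code:: python

    from skorch.callbacks import Callback

    class BatchLossPrinter(Callback):
        # Hypothetical example; prints the loss of every batch.
        def on_batch_end(self, net, batch=None, training=None, loss=None,
                         y_pred=None, **kwargs):
            # `training` indicates whether this was a train or validation batch.
            phase = "train" if training else "valid"
            print("{} batch loss: {:.4f}".format(phase, loss.item()))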
@@ -89,19 +88,18 @@ update step was performed. Gets the module parameters as additional
 input as well as the batch data. Useful if you want to tinker with
 gradients.
 
-
 Setting callback parameters
 ---------------------------
 
 You can set specific callback parameters using the usual ``set_params``
 interface on the network by using the ``callbacks__`` prefix and the
-callback's name. For example to change the scoring order of the train
-loss you can write this:
+callback's name. For example, to change the name of the accuracy of the
+validation set shown during training, you would do:
 
 .. code:: python
 
-    net = NeuralNet()
-    net.set_params(callbacks__train_loss__lower_is_better=False)
+    net = NeuralNetClassifier(...)
+    net.set_params(callbacks__valid_acc__name="accuracy of valid set")
 
 Changes will be applied on initialization and callbacks that
 are changed using ``set_params`` will be re-initialized.
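For the gradient-tinkering use case of ``on_grad_computed`` described
above, here is a hedged sketch that clips gradient norms before the
optimizer step (``GradNormClipper`` is illustrative only; skorch also
ships a dedicated ``GradientNormClipping`` callback):

.. code:: python

    from torch.nn.utils import clip_grad_norm_

    from skorch.callbacks import Callback

    class GradNormClipper(Callback):
        # Hypothetical example; clips gradients right after they are computed.
        def __init__(self, max_norm=1.0):
            self.max_norm = max_norm

        def on_grad_computed(self, net, named_parameters=None, **kwargs):
            # Gradients exist at this point, but no optimizer step was taken yet.
            clip_grad_norm_((p for _, p in named_parameters), self.max_norm)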
@@ -112,7 +110,6 @@ If there is a conflict, the conflicting names will be made unique
 by appending a count suffix starting at 1, e.g.
 ``EpochScoring_1``, ``EpochScoring_2``, etc.
 
-
 Deactivating callbacks
 -----------------------
 
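A short sketch of this naming behavior (``MyModule`` is a placeholder for
your ``torch.nn.Module``; the resulting names shown in the comment are an
assumption based on the suffix rule above):

.. code:: python

    from skorch import NeuralNetClassifier
    from skorch.callbacks import EpochScoring

    net = NeuralNetClassifier(
        MyModule,  # placeholder for your torch.nn.Module
        callbacks=[
            EpochScoring('f1', lower_is_better=False),
            EpochScoring('roc_auc', lower_is_better=False),
        ],
    )
    net.initialize()
    # net.callbacks_ should now list the two callbacks under the
    # de-duplicated names 'EpochScoring_1' and 'EpochScoring_2'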
@@ -141,7 +138,6 @@ compare the performance once with and once without the callback.
 To completely disable all callbacks, including default callbacks,
 set ``callbacks="disable"``.
 
-
 Scoring
 -------
 
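A minimal sketch of both options (``MyModule`` again stands in for your
module; the name ``print_log`` is assumed to refer to skorch's default
``PrintLog`` callback):

.. code:: python

    from skorch import NeuralNetClassifier

    # deactivate a single callback by setting it to None
    net = NeuralNetClassifier(MyModule)
    net.set_params(callbacks__print_log=None)

    # turn off all callbacks, including the default ones
    net_silent = NeuralNetClassifier(MyModule, callbacks="disable")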
@@ -171,11 +167,12 @@ are unfamiliar, here is a short explanation:
 
 - If you pass a string, sklearn makes a look-up for a score with
   that name. Examples would be ``'f1'`` and ``'roc_auc'``.
-- If you pass ``None``, the model's ``score`` method is used. By
-  default, :class:`.NeuralNet` and its subclasses don't provide a
-  ``score`` method, but you can easily implement your own. If you do,
-  it should take ``X`` and ``y`` (the target) as input and return a
-  scalar as output.
+- If you pass ``None``, the model's ``score`` method is used. By default,
+  :class:`.NeuralNet` doesn't provide a ``score`` method, but you can easily
+  implement your own by subclassing it. If you do, it should take ``X`` and
+  ``y`` (the target) as input and return a scalar as output.
+  :class:`.NeuralNetClassifier` and :class:`.NeuralNetRegressor` have the
+  same score methods as normal sklearn classifiers and regressors.
 - Finally, you can pass a function/callable. In that case, this
   function should have the signature ``func(net, X, y)`` and return a
   scalar.
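As a sketch of the first and last option from the list above:

.. code:: python

    from sklearn.metrics import accuracy_score

    from skorch.callbacks import EpochScoring

    # a string: sklearn looks the scorer up by name
    f1_scoring = EpochScoring('f1', lower_is_better=False)

    # a callable with the signature func(net, X, y) returning a scalar
    def my_accuracy(net, X, y):
        y_pred = net.predict(X)
        return accuracy_score(y, y_pred)

    acc_scoring = EpochScoring(my_accuracy, lower_is_better=False, name='my_acc')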
@@ -192,9 +189,8 @@ called ``'f1'``, you should set ``lower_is_better=False``. The
 score itself, and an entry for ``'f1_best'``, which says whether this
 is the best f1 score so far.
 
-``on_train`` is used to indicate whether training or validation data
-should be used to determine the score. By default, it is set to
-validation.
+``on_train`` is a bool that indicates whether training or validation
+data should be used to determine the score. By default, it is set to validation.
 
 Finally, you may have to provide your own ``target_extractor``. This
 should be a function or callable that is applied to the target before
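A hedged sketch of these two points, assuming ``net`` was fitted with an
``EpochScoring('f1', lower_is_better=False)`` callback:

.. code:: python

    from skorch.callbacks import EpochScoring

    # score on training instead of validation data
    train_f1 = EpochScoring(
        'f1', lower_is_better=False, on_train=True, name='train_f1')

    # after net.fit(X, y), each epoch in the history holds the score
    # itself and a *_best flag
    print(net.history[-1, 'f1'])       # f1 score of the last epoch
    print(net.history[-1, 'f1_best'])  # True if this epoch had the best f1 so far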
@@ -208,19 +204,21 @@ calculate any new scores. Instead it uses an existing score that is
 calculated for each batch (the train loss, for example) and determines
 the average of this score, which is then written to the epoch level of
 the net's ``history``. This is very useful if the score was already
-calculated and logged on the batch level and you're only interested to
+calculated and logged on the batch level and you're interested to
 see the averaged score on the epoch level.
 
 For this callback, you only need to provide the ``name`` of the score
 in the ``history``. Moreover, you may again specify if
 ``lower_is_better`` and if the score should be calculated ``on_train``
 or not.
 
-.. note:: Both :class:`.BatchScoring` and :class:`.PassthroughScoring`
-          honor the batch size when calculating the average. This can
-          make a difference when not all batch sizes are equal, which
-          is typically the case because the last batch of an epoch
-          contains fewer samples than the rest.
+.. note::
+
+  Both :class:`.BatchScoring` and :class:`.PassthroughScoring`
+  honor the batch size when calculating the average. This can
+  make a difference when not all batch sizes are equal, which
+  is typically the case because the last batch of an epoch
+  contains fewer samples than the rest.
 
 
 Checkpoint
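A minimal sketch of the :class:`.PassthroughScoring` usage described in
the hunk above (skorch already averages ``train_loss`` this way by
default; the example only illustrates the parameters):

.. code:: python

    from skorch.callbacks import PassthroughScoring

    # average the batch-level 'train_loss' entries onto the epoch level;
    # on_train=True because the train loss is computed on training batches
    pass_scoring = PassthroughScoring(name='train_loss', on_train=True)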
@@ -261,7 +259,7 @@ Learning rate schedulers
 The :class:`.LRScheduler` callback allows the use of the various
 learning rate schedulers defined in :mod:`torch.optim.lr_scheduler`,
 such as :class:`~torch.optim.lr_scheduler.ReduceLROnPlateau`, which
-allows dynamic learning rate reducing based on a given value to
+allows dynamic learning rate reduction based on a given value to
 monitor, or :class:`~torch.optim.lr_scheduler.CyclicLR`, which cycles
 the learning rate between two boundaries with a constant frequency.
 
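A hedged sketch of hooking up :class:`~torch.optim.lr_scheduler.ReduceLROnPlateau`
(``MyModule`` is a placeholder; extra keyword arguments such as ``patience``
are passed through to the underlying scheduler):

.. code:: python

    from torch.optim.lr_scheduler import ReduceLROnPlateau

    from skorch import NeuralNetClassifier
    from skorch.callbacks import LRScheduler

    # reduce the learning rate when the validation loss stops improving
    lr_scheduler = LRScheduler(
        policy=ReduceLROnPlateau, monitor='valid_loss', patience=3)
    net = NeuralNetClassifier(MyModule, callbacks=[lr_scheduler])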