
Commit 877938f

DOC Update the documentation in several places (#909)
* Update the documentation in several places

  I went through what I thought would be the most commonly visited parts of
  our documentation and made improvements when I saw the opportunity. Mostly,
  this comes down to wording, adding information about changed/new features,
  adding links or fixing broken ones, etc., nothing major.

* Reviewer comment: better doc

Co-authored-by: ottonemo <[email protected]>
1 parent 647702b commit 877938f

8 files changed: +97 −86 lines changed


README.rst: +5 −9
@@ -50,17 +50,15 @@ To see more elaborate examples, look `here
     import numpy as np
     from sklearn.datasets import make_classification
     from torch import nn
-
     from skorch import NeuralNetClassifier

-
     X, y = make_classification(1000, 20, n_informative=10, random_state=0)
     X = X.astype(np.float32)
     y = y.astype(np.int64)

     class MyModule(nn.Module):
         def __init__(self, num_units=10, nonlin=nn.ReLU()):
-            super(MyModule, self).__init__()
+            super().__init__()

             self.dense0 = nn.Linear(20, num_units)
             self.nonlin = nonlin
@@ -76,7 +74,6 @@ To see more elaborate examples, look `here
             X = self.softmax(self.output(X))
             return X

-
     net = NeuralNetClassifier(
         MyModule,
         max_epochs=10,
@@ -95,7 +92,6 @@ In an `sklearn Pipeline <https://scikit-learn.org/stable/modules/generated/sklea
     from sklearn.pipeline import Pipeline
     from sklearn.preprocessing import StandardScaler

-
     pipe = Pipeline([
         ('scale', StandardScaler()),
         ('net', net),
@@ -110,7 +106,6 @@ With `grid search <https://scikit-learn.org/stable/modules/generated/sklearn.mod

     from sklearn.model_selection import GridSearchCV

-
     # deactivate skorch-internal train-valid split and verbose logging
     net.set_params(train_split=False, verbose=0)
     params = {
@@ -134,12 +129,13 @@ skorch also provides many convenient features, among others:
 - `Progress bar <https://skorch.readthedocs.io/en/stable/callbacks.html#skorch.callbacks.ProgressBar>`_ (for CLI as well as jupyter)
 - `Automatic inference of CLI parameters <https://github.com/skorch-dev/skorch/tree/master/examples/cli>`_
 - `Integration with GPyTorch for Gaussian Processes <https://skorch.readthedocs.io/en/latest/user/probabilistic.html>`_
+- `Integration with Hugging Face 🤗 <https://skorch.readthedocs.io/en/stable/user/huggingface.html>`_

 ============
 Installation
 ============

-skorch requires Python 3.6 or higher.
+skorch requires Python 3.7 or higher.

 conda installation
 ==================
@@ -187,7 +183,7 @@ To install skorch from source using conda, proceed as follows:
     git clone https://github.com/skorch-dev/skorch.git
     cd skorch
     conda env create
-    source activate skorch
+    conda activate skorch
     python -m pip install .

 If you want to help developing, run:
@@ -197,7 +193,7 @@ If you want to help developing, run:
     git clone https://github.com/skorch-dev/skorch.git
     cd skorch
     conda env create
-    source activate skorch
+    conda activate skorch
     python -m pip install -e .

     py.test # unit tests
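
For reference, the README snippets these hunks touch assemble into roughly the
following runnable script. The hidden layers of ``MyModule`` and the
grid-search grid are not fully visible in the hunks above, so they are filled
in here as plausible assumptions, not as part of this commit:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from torch import nn

    from skorch import NeuralNetClassifier

    X, y = make_classification(1000, 20, n_informative=10, random_state=0)
    X = X.astype(np.float32)
    y = y.astype(np.int64)

    class MyModule(nn.Module):
        # architecture partly assumed; only dense0/nonlin and the final
        # softmax/output appear in the hunks above
        def __init__(self, num_units=10, nonlin=nn.ReLU()):
            super().__init__()
            self.dense0 = nn.Linear(20, num_units)
            self.nonlin = nonlin
            self.dropout = nn.Dropout(0.5)
            self.output = nn.Linear(num_units, 2)
            self.softmax = nn.Softmax(dim=-1)

        def forward(self, X, **kwargs):
            X = self.nonlin(self.dense0(X))
            X = self.dropout(X)
            X = self.softmax(self.output(X))
            return X

    net = NeuralNetClassifier(MyModule, max_epochs=10)

    # deactivate skorch-internal train-valid split and verbose logging
    net.set_params(train_split=False, verbose=0)

    # hypothetical grid; the hunk above cuts off before the README's real one
    params = {
        'lr': [0.01, 0.02],
        'module__num_units': [10, 20],
    }
    gs = GridSearchCV(net, params, refit=False, cv=3, scoring='accuracy')
    gs.fit(X, y)
    print(gs.best_score_, gs.best_params_)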

docs/user/callbacks.rst: +22 −24
@@ -48,7 +48,7 @@ on_train_begin(net, X, y)
 ^^^^^^^^^^^^^^^^^^^^^^^^^

 Called once at the start of the training process (e.g. when calling
-fit).
+``fit``).

 on_train_end(net, X, y)
 ^^^^^^^^^^^^^^^^^^^^^^^
@@ -74,7 +74,6 @@ Called once before each batch of data is processed, i.e. possibly
 several times per epoch. Gets batch data as additional input.
 Also includes a bool indicating if this is a training batch or not.

-
 on_batch_end(net, batch, training, loss, y_pred)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -89,19 +88,18 @@ update step was performed. Gets the module parameters as additional
 input as well as the batch data. Useful if you want to tinker with
 gradients.

-
 Setting callback parameters
 ---------------------------

 You can set specific callback parameters using the usual `set_params`
 interface on the network by using the `callbacks__` prefix and the
-callback's name. For example to change the scoring order of the train
-loss you can write this:
+callback's name. For example, to change the name of the accuracy of the
+validation set shown during training, you would do:

 .. code:: python

-    net = NeuralNet()
-    net.set_params(callbacks__train_loss__lower_is_better=False)
+    net = NeuralNetClassifier(...)
+    net.set_params(callbacks__valid_acc__name="accuracy of valid set")

 Changes will be applied on initialization and callbacks that
 are changed using `set_params` will be re-initialized.
@@ -112,7 +110,6 @@ If there is a conflict, the conflicting names will be made unique
 by appending a count suffix starting at 1, e.g.
 ``EpochScoring_1``, ``EpochScoring_2``, etc.

-
 Deactivating callbacks
 -----------------------

@@ -141,7 +138,6 @@ compare the performance once with and once without the callback.
 To completely disable all callbacks, including default callbacks,
 set ``callbacks="disable"``.

-
 Scoring
 -------

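To make the two deactivation options concrete, here is a minimal sketch
(assuming a module ``MyModule``; ``print_log`` is one of skorch's default
callbacks):

    from skorch import NeuralNetClassifier

    net = NeuralNetClassifier(MyModule)
    # deactivate a single (default) callback by setting it to None
    net.set_params(callbacks__print_log=None)

    # or turn off all callbacks, including the default ones
    net_bare = NeuralNetClassifier(MyModule, callbacks="disable")
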
@@ -171,11 +167,12 @@ are unfamiliar, here is a short explanation:

 - If you pass a string, sklearn makes a look-up for a score with
   that name. Examples would be ``'f1'`` and ``'roc_auc'``.
-- If you pass ``None``, the model's ``score`` method is used. By
-  default, :class:`.NeuralNet` and its subclasses don't provide a
-  ``score`` method, but you can easily implement your own. If you do,
-  it should take ``X`` and ``y`` (the target) as input and return a
-  scalar as output.
+- If you pass ``None``, the model's ``score`` method is used. By default,
+  :class:`.NeuralNet` doesn't provide a ``score`` method, but you can easily
+  implement your own by subclassing it. If you do, it should take ``X`` and
+  ``y`` (the target) as input and return a scalar as output.
+  :class:`.NeuralNetClassifier` and :class:`.NeuralNetRegressor` have the
+  same score methods as normal sklearn classifiers and regressors.
 - Finally, you can pass a function/callable. In that case, this
   function should have the signature ``func(net, X, y)`` and return a
   scalar.
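
A minimal sketch of the last option, assuming ``MyModule`` from the README
example; the score function and its name are made up for illustration:

    from sklearn.metrics import balanced_accuracy_score

    from skorch import NeuralNetClassifier
    from skorch.callbacks import EpochScoring

    def balanced_acc(net, X, y):
        # signature func(net, X, y) -> scalar, evaluated once per epoch
        return balanced_accuracy_score(y, net.predict(X))

    net = NeuralNetClassifier(
        MyModule,
        callbacks=[
            # string: sklearn look-up; callable: used as-is
            EpochScoring('f1', lower_is_better=False),
            EpochScoring(balanced_acc, lower_is_better=False, name='balanced_acc'),
        ],
    )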
@@ -192,9 +189,8 @@ called ``'f1'``, you should set ``lower_is_better=False``. The
 score itself, and an entry for ``'f1_best'``, which says whether this
 is the as of yet best f1 score.

-``on_train`` is used to indicate whether training or validation data
-should be used to determine the score. By default, it is set to
-validation.
+``on_train`` is a bool that is used to indicate whether training or validation
+data should be used to determine the score. By default, it is set to validation.

 Finally, you may have to provide your own ``target_extractor``. This
 should be a function or callable that is applied to the target before
@@ -208,19 +204,21 @@ calculate any new scores. Instead it uses an existing score that is
 calculated for each batch (the train loss, for example) and determines
 the average of this score, which is then written to the epoch level of
 the net's ``history``. This is very useful if the score was already
-calculated and logged on the batch level and you're only interested to
+calculated and logged on the batch level and you're interested to
 see the averaged score on the epoch level.

 For this callback, you only need to provide the ``name`` of the score
 in the ``history``. Moreover, you may again specify if
 ``lower_is_better`` and if the score should be calculated ``on_train``
 or not.

-.. note:: Both :class:`.BatchScoring` and :class:`.PassthroughScoring`
-          honor the batch size when calculating the average. This can
-          make a difference when not all batch sizes are equal, which
-          is typically the case because the last batch of an epoch
-          contains fewer samples than the rest.
+.. note::
+
+    Both :class:`.BatchScoring` and :class:`.PassthroughScoring`
+    honor the batch size when calculating the average. This can
+    make a difference when not all batch sizes are equal, which
+    is typically the case because the last batch of an epoch
+    contains fewer samples than the rest.


 Checkpoint
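
As a sketch of how :class:`.PassthroughScoring` is wired up (the batch-level
score name ``my_batch_score`` is hypothetical and would have to be recorded
per batch by some other callback or a customized net):

    from skorch.callbacks import PassthroughScoring

    net = NeuralNetClassifier(
        MyModule,
        callbacks=[
            # no new score is computed; the existing batch-level entries
            # named 'my_batch_score' are averaged (weighted by batch size)
            # and written to the epoch level of net.history
            PassthroughScoring(name='my_batch_score', lower_is_better=False),
        ],
    )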
@@ -261,7 +259,7 @@ Learning rate schedulers
 The :class:`.LRScheduler` callback allows the use of the various
 learning rate schedulers defined in :mod:`torch.optim.lr_scheduler`,
 such as :class:`~torch.optim.lr_scheduler.ReduceLROnPlateau`, which
-allows dynamic learning rate reducing based on a given value to
+allows dynamic learning rate reduction based on a given value to
 monitor, or :class:`~torch.optim.lr_scheduler.CyclicLR`, which cycles
 the learning rate between two boundaries with a constant frequency.

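For illustration, both schedulers mentioned here could be attached like this
(a sketch; the concrete hyperparameters are placeholders):

    from torch.optim.lr_scheduler import ReduceLROnPlateau

    from skorch.callbacks import LRScheduler

    # reduce the learning rate once the monitored score stops improving
    lr_plateau = LRScheduler(
        policy=ReduceLROnPlateau, monitor='valid_loss', factor=0.5, patience=3)

    # or cycle the learning rate between two boundaries, stepping per batch
    lr_cyclic = LRScheduler(
        policy='CyclicLR', base_lr=1e-4, max_lr=1e-2, step_every='batch')

    net = NeuralNetClassifier(MyModule, callbacks=[lr_plateau])
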
docs/user/installation.rst: +2 −2
@@ -36,7 +36,7 @@ If you just want to use skorch, use:
     git clone https://github.com/skorch-dev/skorch.git
     cd skorch
     conda env create
-    source activate skorch
+    conda activate skorch
     python -m pip install .

 If you want to help developing, run:
@@ -46,7 +46,7 @@ If you want to help developing, run:
     git clone https://github.com/skorch-dev/skorch.git
     cd skorch
     conda env create
-    source activate skorch
+    conda activate skorch
     python -m pip install -e .

     py.test # unit tests