This repository was archived by the owner on Dec 16, 2022. It is now read-only.

Commit 4de68a4

Improves API docs and docstring consistency (#4244)
* refactor py2md
* improve py2md, warn if backticks missing
* ensure backticks consistent
* remove docstring help test
* fixes and handle more edge cases
* add failing test for pydoc-markdown bug
* update pydoc-markdown
* fix some links
1 parent 1b0d231 commit 4de68a4


112 files changed (+1072, -817 lines)
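
The py2md changes themselves are outside this excerpt; the visible portion of the diff covers the Makefile and the `allennlp/commands` module docstrings. As a rough illustration of the "warn if backticks missing" item from the commit message, a check along these lines could flag docstrings whose inline-code markers do not pair up (a hypothetical sketch, not the code added by this commit):

```python
import warnings


def warn_on_unbalanced_backticks(docstring: str, name: str) -> None:
    # Inline code markers should come in pairs; an odd count usually means
    # a backtick was dropped somewhere in the docstring.
    if docstring.count("`") % 2 != 0:
        warnings.warn(f"possibly unbalanced backticks in the docstring of {name}")


warn_on_unbalanced_backticks(
    "The `evaluate subcommand can be used to ...",  # missing closing backtick
    "allennlp.commands.evaluate",
)
```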

Makefile (+2, -2)

@@ -4,7 +4,7 @@ MD_DOCS_ROOT = docs/
 MD_DOCS_API_ROOT = $(MD_DOCS_ROOT)api/
 MD_DOCS_SRC = $(filter-out $(SRC)/__main__.py %/__init__.py $(SRC)/version.py,$(shell find $(SRC) -type f -name '*.py' | grep -v -E 'tests/'))
 MD_DOCS = $(subst .py,.md,$(subst $(SRC)/,$(MD_DOCS_API_ROOT),$(MD_DOCS_SRC)))
-MD_DOCS_CMD = python scripts/py2md.py
+MD_DOCS_CMD = python allennlp/tools/py2md.py
 MD_DOCS_CONF = mkdocs.yml
 MD_DOCS_CONF_SRC = mkdocs-skeleton.yml
 MD_DOCS_TGT = site/
@@ -118,7 +118,7 @@ $(MD_DOCS_ROOT)%.md : %.md
 $(MD_DOCS_CONF) : $(MD_DOCS_CONF_SRC) $(MD_DOCS)
 	python scripts/build_docs_config.py $@ $(MD_DOCS_CONF_SRC) $(MD_DOCS_ROOT) $(MD_DOCS_API_ROOT)
 
-$(MD_DOCS_API_ROOT)%.md : $(SRC)/%.py scripts/py2md.py
+$(MD_DOCS_API_ROOT)%.md : $(SRC)/%.py allennlp/tools/py2md.py
 	mkdir -p $(shell dirname $@)
 	$(MD_DOCS_CMD) $(subst /,.,$(subst .py,,$<)) --out $@
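
For context, the updated pattern rule maps each Python source file to a markdown file under the API docs root and invokes the relocated py2md script with the dotted module name. A minimal sketch of the equivalent invocation, assuming `$(SRC)` is the `allennlp/` package and that `py2md.py` takes only the module name and `--out`, as the rule shows:

```python
import subprocess
from pathlib import Path


def build_api_doc(src_path: str, api_root: str = "docs/api/") -> None:
    # e.g. "allennlp/commands/evaluate.py" -> module name "allennlp.commands.evaluate"
    module = src_path[: -len(".py")].replace("/", ".")
    # ... and output file "docs/api/commands/evaluate.md"
    out_file = Path(src_path.replace("allennlp/", api_root, 1)).with_suffix(".md")
    out_file.parent.mkdir(parents=True, exist_ok=True)  # mirrors `mkdir -p $(shell dirname $@)`
    subprocess.run(
        ["python", "allennlp/tools/py2md.py", module, "--out", str(out_file)],
        check=True,
    )


build_api_doc("allennlp/commands/evaluate.py")
```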

allennlp/commands/evaluate.py (+1, -47)

@@ -2,54 +2,8 @@
 The `evaluate` subcommand can be used to
 evaluate a trained model against a dataset
 and report any metrics calculated by the model.
-
-    $ allennlp evaluate --help
-    usage: allennlp evaluate [-h] [--output-file OUTPUT_FILE]
-                             [--weights-file WEIGHTS_FILE]
-                             [--cuda-device CUDA_DEVICE] [-o OVERRIDES]
-                             [--batch-size BATCH_SIZE]
-                             [--batch-weight-key BATCH_WEIGHT_KEY]
-                             [--extend-vocab]
-                             [--embedding-sources-mapping EMBEDDING_SOURCES_MAPPING]
-                             [--include-package INCLUDE_PACKAGE]
-                             archive_file input_file
-
-    Evaluate the specified model + dataset
-
-    positional arguments:
-      archive_file          path to an archived trained model
-      input_file            path to the file containing the evaluation data
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      --output-file OUTPUT_FILE
-                            path to output file
-      --weights-file WEIGHTS_FILE
-                            a path that overrides which weights file to use
-      --cuda-device CUDA_DEVICE
-                            id of GPU to use (if any)
-      -o OVERRIDES, --overrides OVERRIDES
-                            a JSON structure used to override the experiment
-                            configuration
-      --batch-size BATCH_SIZE
-                            If non-empty, the batch size to use during evaluation.
-      --batch-weight-key BATCH_WEIGHT_KEY
-                            If non-empty, name of metric used to weight the loss
-                            on a per-batch basis.
-      --extend-vocab        if specified, we will use the instances in your new
-                            dataset to extend your vocabulary. If pretrained-file
-                            was used to initialize embedding layers, you may also
-                            need to pass --embedding-sources-mapping.
-      --embedding-sources-mapping EMBEDDING_SOURCES_MAPPING
-                            a JSON dict defining mapping from embedding module
-                            path to embedding pretrained-file used during
-                            training. If not passed, and embedding needs to be
-                            extended, we will try to use the original file paths
-                            used during training. If they are not available we
-                            will use random vectors for embedding extension.
-      --include-package INCLUDE_PACKAGE
-                            additional packages to include
 """
+
 import argparse
 import json
 import logging
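
The same pattern repeats in the command modules below: the copied `--help` output is dropped from the module docstring, while the up-to-date text remains available from the CLI itself (e.g. `allennlp evaluate --help`). As a hedged illustration of why the copy is redundant, argparse can regenerate such text from the argument definitions; the stand-alone parser below is hypothetical and only reuses names and help strings from the removed block above, not AllenNLP's actual `Subcommand` wiring:

```python
import argparse

# Hypothetical stand-alone parser; argument names and help strings are taken
# from the usage text removed above, not from the commit's own code.
parser = argparse.ArgumentParser(
    prog="allennlp evaluate",
    description="Evaluate the specified model + dataset",
)
parser.add_argument("archive_file", help="path to an archived trained model")
parser.add_argument("input_file", help="path to the file containing the evaluation data")
parser.add_argument("--output-file", help="path to output file")
parser.add_argument("--cuda-device", type=int, help="id of GPU to use (if any)")

# format_help() always reflects the current arguments, so it cannot go stale
# the way a help transcript copied into a docstring can.
print(parser.format_help())
```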

allennlp/commands/find_learning_rate.py (+2, -39)

@@ -2,43 +2,6 @@
 The `find-lr` subcommand can be used to find a good learning rate for a model.
 It requires a configuration file and a directory in
 which to write the results.
-
-    $ allennlp find-lr --help
-    usage: allennlp find-lr [-h] -s SERIALIZATION_DIR [-o OVERRIDES]
-                            [--start-lr START_LR] [--end-lr END_LR]
-                            [--num-batches NUM_BATCHES]
-                            [--stopping-factor STOPPING_FACTOR] [--linear] [-f]
-                            [--include-package INCLUDE_PACKAGE]
-                            param_path
-
-    Find a learning rate range where loss decreases quickly for the specified
-    model and dataset.
-
-    positional arguments:
-      param_path            path to parameter file describing the model to be
-                            trained
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      -s SERIALIZATION_DIR, --serialization-dir SERIALIZATION_DIR
-                            The directory in which to save results.
-      -o OVERRIDES, --overrides OVERRIDES
-                            a JSON structure used to override the experiment
-                            configuration
-      --start-lr START_LR   learning rate to start the search (default = 1e-05)
-      --end-lr END_LR       learning rate up to which search is done (default =
-                            10)
-      --num-batches NUM_BATCHES
-                            number of mini-batches to run learning rate finder
-                            (default = 100)
-      --stopping-factor STOPPING_FACTOR
-                            stop the search when the current loss exceeds the best
-                            loss recorded by multiple of stopping factor
-      --linear              increase learning rate linearly instead of exponential
-                            increase
-      -f, --force           overwrite the output directory if it exists
-      --include-package INCLUDE_PACKAGE
-                            additional packages to include
 """
 
 import argparse
@@ -161,7 +124,7 @@ def find_learning_rate_model(
 
     # Parameters
 
-    params : [`Params`](../common/params.md#params)
+    params : `Params`
         A parameter object specifying an AllenNLP Experiment.
     serialization_dir : `str`
         The directory in which to save results.
@@ -266,7 +229,7 @@ def search_learning_rate(
 
     # Parameters
 
-    trainer: [`GradientDescentTrainer`](../training/trainer.md#gradientdescenttrainer)
+    trainer: `GradientDescentTrainer`
     start_lr : `float`
         The learning rate to start the search.
     end_lr : `float`
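
The two one-line hunks above also show the updated cross-reference convention: parameter types are now written as plain backticked names (`Params`, `GradientDescentTrainer`) rather than hand-maintained relative markdown links, leaving link resolution to the documentation tooling. As a hedged illustration, a docstring in the updated style might look like the following (hypothetical function; only the `# Parameters` layout and the descriptions are taken from the diff and the removed help text):

```python
def toy_learning_rate_search(trainer, start_lr: float = 1e-5, end_lr: float = 10.0) -> None:
    """
    Sketch of the updated docstring convention.

    # Parameters

    trainer : `GradientDescentTrainer`
        The trainer whose learning rate is varied during the search.
    start_lr : `float`
        The learning rate to start the search.
    end_lr : `float`
        The learning rate up to which search is done.
    """
```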

allennlp/commands/predict.py (+1, -44)

@@ -2,51 +2,8 @@
 The `predict` subcommand allows you to make bulk JSON-to-JSON
 or dataset to JSON predictions using a trained model and its
 [`Predictor`](../predictors/predictor.md#predictor) wrapper.
-
-    $ allennlp predict --help
-    usage: allennlp predict [-h] [--output-file OUTPUT_FILE]
-                            [--weights-file WEIGHTS_FILE]
-                            [--batch-size BATCH_SIZE] [--silent]
-                            [--cuda-device CUDA_DEVICE] [--use-dataset-reader]
-                            [--dataset-reader-choice {train,validation}]
-                            [-o OVERRIDES] [--predictor PREDICTOR]
-                            [--include-package INCLUDE_PACKAGE]
-                            archive_file input_file
-
-    Run the specified model against a JSON-lines input file.
-
-    positional arguments:
-      archive_file          the archived model to make predictions with
-      input_file            path to or url of the input file
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      --output-file OUTPUT_FILE
-                            path to output file
-      --weights-file WEIGHTS_FILE
-                            a path that overrides which weights file to use
-      --batch-size BATCH_SIZE
-                            The batch size to use for processing
-      --silent              do not print output to stdout
-      --cuda-device CUDA_DEVICE
-                            id of GPU to use (if any)
-      --use-dataset-reader  Whether to use the dataset reader of the original
-                            model to load Instances. The validation dataset reader
-                            will be used if it exists, otherwise it will fall back
-                            to the train dataset reader. This behavior can be
-                            overridden with the --dataset-reader-choice flag.
-      --dataset-reader-choice {train,validation}
-                            Indicates which model dataset reader to use if the
-                            --use-dataset-reader flag is set. (default =
-                            validation)
-      -o OVERRIDES, --overrides OVERRIDES
-                            a JSON structure used to override the experiment
-                            configuration
-      --predictor PREDICTOR
-                            optionally specify a specific predictor to use
-      --include-package INCLUDE_PACKAGE
-                            additional packages to include
 """
+
 from typing import List, Iterator, Optional
 import argparse
 import sys

allennlp/commands/print_results.py (+1, -22)

@@ -1,29 +1,8 @@
 """
 The `print-results` subcommand allows you to print results from multiple
 allennlp serialization directories to the console in a helpful csv format.
-
-    $ allennlp print-results --help
-    usage: allennlp print-results [-h] [-k KEYS [KEYS ...]] [-m METRICS_FILENAME]
-                                  [--include-package INCLUDE_PACKAGE]
-                                  path
-
-    Print results from allennlp training runs in a helpful CSV format.
-
-    positional arguments:
-      path                  Path to recursively search for allennlp serialization
-                            directories.
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      -k KEYS [KEYS ...], --keys KEYS [KEYS ...]
-                            Keys to print from metrics.json.Keys not present in
-                            all metrics.json will result in "N/A"
-      -m METRICS_FILENAME, --metrics-filename METRICS_FILENAME
-                            Name of the metrics file to inspect. (default =
-                            metrics.json)
-      --include-package INCLUDE_PACKAGE
-                            additional packages to include
 """
+
 import argparse
 import json
 import logging

allennlp/commands/subcommand.py (+1)

@@ -1,6 +1,7 @@
 """
 Base class for subcommands under `allennlp.run`.
 """
+
 import argparse
 from typing import Callable, Dict, Optional, Type, TypeVar

allennlp/commands/test_install.py (-14)

@@ -1,20 +1,6 @@
 """
 The `test-install` subcommand verifies
 an installation by running the unit tests.
-
-    $ allennlp test-install --help
-    usage: allennlp test-install [-h] [--run-all] [-k K]
-                                 [--include-package INCLUDE_PACKAGE]
-
-    Test that installation works by running the unit tests.
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      --run-all             By default, we skip tests that are slow or download
-                            large files. This flag will run all tests.
-      -k K                  Limit tests by setting pytest -k argument
-      --include-package INCLUDE_PACKAGE
-                            additional packages to include
 """
 
 import argparse
