This repository was archived by the owner on Dec 16, 2022. It is now read-only.

Commit f9e2029

bryant1410 authored and DeNeutoy committed
Update HTTP links to HTTPS where possible (#3142)
* Update HTTP links to HTTPS where possible
* Fix line too long
1 parent: 9093f47 · commit: f9e2029

38 files changed: +120 -120 lines

LICENSE (+2 -2)

@@ -1,6 +1,6 @@
 Apache License
 Version 2.0, January 2004
-http://www.apache.org/licenses/
+https://www.apache.org/licenses/

 TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

@@ -192,7 +192,7 @@
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

-http://www.apache.org/licenses/LICENSE-2.0
+https://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,

MODELS.md (+1 -1)

@@ -92,7 +92,7 @@ Based on [Dozat and Manning, 2017](https://arxiv.org/pdf/1611.01734.pdf)

 * [biaffine-dependency-parser-ptb-2018.08.23.tar.gz](https://allennlp.s3.amazonaws.com/models/biaffine-dependency-parser-ptb-2018.08.23.tar.gz) (69 MB) uses [Penn Treebank](https://catalog.ldc.upenn.edu/ldc99t42) style dependencies.

-* [biaffine-dependency-parser-ud-2018.08.23.tar.gz](https://allennlp.s3.amazonaws.com/models/biaffine-dependency-parser-ud-2018.08.23.tar.gz) (61 MB) uses [Universal Dependency](http://universaldependencies.org/) style depedencies.
+* [biaffine-dependency-parser-ud-2018.08.23.tar.gz](https://allennlp.s3.amazonaws.com/models/biaffine-dependency-parser-ud-2018.08.23.tar.gz) (61 MB) uses [Universal Dependency](https://universaldependencies.org/) style depedencies.

 ```
 f1: 0.941

README.md (+2 -2)

@@ -8,7 +8,7 @@ for developing state-of-the-art deep learning models on a wide variety of lingui

 ## Quick Links

-* [Website](http://www.allennlp.org/)
+* [Website](https://allennlp.org/)
 * [Tutorial](https://allennlp.org/tutorials)
 * [Documentation](https://allenai.github.io/allennlp-docs/)
 * [Contributing Guidelines](CONTRIBUTING.md)

@@ -96,7 +96,7 @@ AllenNLP installs a script when you install the python package, meaning you can
 You can now test your installation with `allennlp test-install`.

 _`pip` currently installs Pytorch for CUDA 9 only (or no GPU). If you require an older version,
-please visit http://pytorch.org/ and install the relevant pytorch binary._
+please visit https://pytorch.org/ and install the relevant pytorch binary._

 ### Installing using Docker

allennlp/commands/elmo.py (+1 -1)

@@ -5,7 +5,7 @@
 layers used to compute ELMo representations to a single (potentially large) file.

 The input file is previously tokenized, whitespace separated text, one sentence per line.
-The output is a hdf5 file (<http://docs.h5py.org/en/latest/>) where, with the --all flag, each
+The output is a hdf5 file (<https://h5py.readthedocs.io/en/latest/>) where, with the --all flag, each
 sentence is a size (3, num_tokens, 1024) array with the biLM representations.

 For information, see "Deep contextualized word representations", Peters et al 2018.
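For readers new to the format named in this hunk, a minimal sketch of inspecting that HDF5 output with h5py follows. The file name and the key scheme (one dataset per sentence, keyed by its 0-based line index as a string) are assumptions; list the file's keys to confirm for your version.

```python
# Minimal sketch (assumptions noted above) of reading `allennlp elmo --all` output.
import h5py

with h5py.File("elmo_embeddings.hdf5", "r") as f:  # hypothetical file name
    print(list(f.keys()))                          # one dataset per sentence (assumed)
    embeddings = f["0"][...]                       # assumed key: sentence index "0"
    print(embeddings.shape)                        # (3, num_tokens, 1024) per the docstring
```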

allennlp/data/dataset_readers/dataset_utils/ontonotes.py (+1 -1)

@@ -85,7 +85,7 @@ class Ontonotes:
 This DatasetReader is designed to read in the English OntoNotes v5.0 data
 in the format used by the CoNLL 2011/2012 shared tasks. In order to use this
 Reader, you must follow the instructions provided `here (v12 release):
-<http://cemantix.org/data/ontonotes.html>`_, which will allow you to download
+<https://cemantix.org/data/ontonotes.html>`_, which will allow you to download
 the CoNLL style annotations for the OntoNotes v5.0 release -- LDC2013T19.tgz
 obtained from LDC.

allennlp/models/semantic_role_labeler.py (+3 -3)

@@ -265,7 +265,7 @@ def write_to_conll_eval_file(prediction_file: TextIO,
 predicate in a sentence to two provided file references.

 The CoNLL SRL format is described in
-`the shared task data README <http://www.lsi.upc.edu/~srlconll/conll05st-release/README>`_ .
+`the shared task data README <https://www.lsi.upc.edu/~srlconll/conll05st-release/README>`_ .

 This function expects IOB2-formatted tags, where the B- tag is used in the beginning
 of every chunk (i.e. all chunks start with the B- tag).

@@ -309,7 +309,7 @@ def write_bio_formatted_tags_to_file(prediction_file: TextIO,
 predicate in a sentence to two provided file references.

 The CoNLL SRL format is described in
-`the shared task data README <http://www.lsi.upc.edu/~srlconll/conll05st-release/README>`_ .
+`the shared task data README <https://www.lsi.upc.edu/~srlconll/conll05st-release/README>`_ .

 This function expects IOB2-formatted tags, where the B- tag is used in the beginning
 of every chunk (i.e. all chunks start with the B- tag).

@@ -352,7 +352,7 @@ def write_conll_formatted_tags_to_file(prediction_file: TextIO,
 predicate in a sentence to two provided file references.

 The CoNLL SRL format is described in
-`the shared task data README <http://www.lsi.upc.edu/~srlconll/conll05st-release/README>`_ .
+`the shared task data README <https://www.lsi.upc.edu/~srlconll/conll05st-release/README>`_ .

 This function expects IOB2-formatted tags, where the B- tag is used in the beginning
 of every chunk (i.e. all chunks start with the B- tag).
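All three docstrings in this hunk lean on the IOB2 convention, so a tiny hypothetical example may help; the words and role labels below are invented for illustration.

```python
# Hypothetical IOB2 tagging for the predicate "sat": every chunk opens
# with B-, and I- only continues the chunk opened immediately before it.
words = ["The",    "cat",    "sat", "on",         "the",        "mat"]
tags  = ["B-ARG0", "I-ARG0", "B-V", "B-ARGM-LOC", "I-ARGM-LOC", "I-ARGM-LOC"]
```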

allennlp/modules/__init__.py (+1 -1)

@@ -1,6 +1,6 @@
 """
 Custom PyTorch
-`Module <http://pytorch.org/docs/master/nn.html#torch.nn.Module>`_ s
+`Module <https://pytorch.org/docs/master/nn.html#torch.nn.Module>`_ s
 that are used as components in AllenNLP
 :class:`~allennlp.models.model.Model` s.
 """

allennlp/modules/seq2seq_encoders/__init__.py (+3 -3)

@@ -6,9 +6,9 @@

 The available Seq2Seq encoders are

-* `"gru" <http://pytorch.org/docs/master/nn.html#torch.nn.GRU>`_
-* `"lstm" <http://pytorch.org/docs/master/nn.html#torch.nn.LSTM>`_
-* `"rnn" <http://pytorch.org/docs/master/nn.html#torch.nn.RNN>`_
+* `"gru" <https://pytorch.org/docs/master/nn.html#torch.nn.GRU>`_
+* `"lstm" <https://pytorch.org/docs/master/nn.html#torch.nn.LSTM>`_
+* `"rnn" <https://pytorch.org/docs/master/nn.html#torch.nn.RNN>`_
 * :class:`"augmented_lstm" <allennlp.modules.augmented_lstm.AugmentedLstm>`
 * :class:`"alternating_lstm" <allennlp.modules.stacked_alternating_lstm.StackedAlternatingLstm>`
 * :class:`"alternating_highway_lstm" <allennlp.modules.stacked_alternating_lstm.StackedAlternatingLstm> (GPU only)`

allennlp/modules/seq2seq_encoders/bidirectional_language_model_transformer.py (+1 -1)

@@ -1,6 +1,6 @@
 """
 The BidirectionalTransformerEncoder from Calypso.
-This is basically the transformer from http://nlp.seas.harvard.edu/2018/04/03/attention.html
+This is basically the transformer from https://nlp.seas.harvard.edu/2018/04/03/attention.html
 so credit to them.

 This code should be considered "private" in that we have several

allennlp/modules/seq2vec_encoders/__init__.py (+3 -3)

@@ -6,9 +6,9 @@

 The available Seq2Vec encoders are

-* `"gru" <http://pytorch.org/docs/master/nn.html#torch.nn.GRU>`_
-* `"lstm" <http://pytorch.org/docs/master/nn.html#torch.nn.LSTM>`_
-* `"rnn" <http://pytorch.org/docs/master/nn.html#torch.nn.RNN>`_
+* `"gru" <https://pytorch.org/docs/master/nn.html#torch.nn.GRU>`_
+* `"lstm" <https://pytorch.org/docs/master/nn.html#torch.nn.LSTM>`_
+* `"rnn" <https://pytorch.org/docs/master/nn.html#torch.nn.RNN>`_
 * :class:`"cnn" <allennlp.modules.seq2vec_encoders.cnn_encoder.CnnEncoder>`
 * :class:`"augmented_lstm" <allennlp.modules.augmented_lstm.AugmentedLstm>`
 * :class:`"alternating_lstm" <allennlp.modules.stacked_alternating_lstm.StackedAlternatingLstm>`

allennlp/modules/token_embedders/embedding.py (+1 -1)

@@ -272,7 +272,7 @@ def from_params(cls, vocab: Vocabulary, params: Params) -> 'Embedding': # type:

 where ``archive_uri`` can be a file system path or a URL. For example::

-    "(http://nlp.stanford.edu/data/glove.twitter.27B.zip)#glove.twitter.27B.200d.txt"
+    "(https://nlp.stanford.edu/data/glove.twitter.27B.zip)#glove.twitter.27B.200d.txt"
 """
 # pylint: disable=arguments-differ
 num_embeddings = params.pop_int('num_embeddings', None)

allennlp/nn/activations.py (+15 -15)

@@ -2,26 +2,26 @@
 An :class:`Activation` is just a function
 that takes some parameters and returns an element-wise activation function.
 For the most part we just use
-`PyTorch activations <http://pytorch.org/docs/master/nn.html#non-linear-activations>`_.
+`PyTorch activations <https://pytorch.org/docs/master/nn.html#non-linear-activations>`_.
 Here we provide a thin wrapper to allow registering them and instantiating them ``from_params``.

 The available activation functions are

 * "linear"
-* `"relu" <http://pytorch.org/docs/master/nn.html#torch.nn.ReLU>`_
-* `"relu6" <http://pytorch.org/docs/master/nn.html#torch.nn.ReLU6>`_
-* `"elu" <http://pytorch.org/docs/master/nn.html#torch.nn.ELU>`_
-* `"prelu" <http://pytorch.org/docs/master/nn.html#torch.nn.PReLU>`_
-* `"leaky_relu" <http://pytorch.org/docs/master/nn.html#torch.nn.LeakyReLU>`_
-* `"threshold" <http://pytorch.org/docs/master/nn.html#torch.nn.Threshold>`_
-* `"hardtanh" <http://pytorch.org/docs/master/nn.html#torch.nn.Hardtanh>`_
-* `"sigmoid" <http://pytorch.org/docs/master/nn.html#torch.nn.Sigmoid>`_
-* `"tanh" <http://pytorch.org/docs/master/nn.html#torch.nn.Tanh>`_
-* `"log_sigmoid" <http://pytorch.org/docs/master/nn.html#torch.nn.LogSigmoid>`_
-* `"softplus" <http://pytorch.org/docs/master/nn.html#torch.nn.Softplus>`_
-* `"softshrink" <http://pytorch.org/docs/master/nn.html#torch.nn.Softshrink>`_
-* `"softsign" <http://pytorch.org/docs/master/nn.html#torch.nn.Softsign>`_
-* `"tanhshrink" <http://pytorch.org/docs/master/nn.html#torch.nn.Tanhshrink>`_
+* `"relu" <https://pytorch.org/docs/master/nn.html#torch.nn.ReLU>`_
+* `"relu6" <https://pytorch.org/docs/master/nn.html#torch.nn.ReLU6>`_
+* `"elu" <https://pytorch.org/docs/master/nn.html#torch.nn.ELU>`_
+* `"prelu" <https://pytorch.org/docs/master/nn.html#torch.nn.PReLU>`_
+* `"leaky_relu" <https://pytorch.org/docs/master/nn.html#torch.nn.LeakyReLU>`_
+* `"threshold" <https://pytorch.org/docs/master/nn.html#torch.nn.Threshold>`_
+* `"hardtanh" <https://pytorch.org/docs/master/nn.html#torch.nn.Hardtanh>`_
+* `"sigmoid" <https://pytorch.org/docs/master/nn.html#torch.nn.Sigmoid>`_
+* `"tanh" <https://pytorch.org/docs/master/nn.html#torch.nn.Tanh>`_
+* `"log_sigmoid" <https://pytorch.org/docs/master/nn.html#torch.nn.LogSigmoid>`_
+* `"softplus" <https://pytorch.org/docs/master/nn.html#torch.nn.Softplus>`_
+* `"softshrink" <https://pytorch.org/docs/master/nn.html#torch.nn.Softshrink>`_
+* `"softsign" <https://pytorch.org/docs/master/nn.html#torch.nn.Softsign>`_
+* `"tanhshrink" <https://pytorch.org/docs/master/nn.html#torch.nn.Tanhshrink>`_
 """

 import torch
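A minimal sketch of the wrapper this docstring describes, assuming the usual ``Registrable.by_name`` pattern: ``by_name`` returns the registered class, which is then instantiated and applied element-wise.

```python
# Hedged sketch: look up a registered activation by name and apply it.
import torch
from allennlp.nn import Activation

relu = Activation.by_name("relu")()
print(relu(torch.tensor([-1.0, 0.5])))  # tensor([0.0000, 0.5000])
```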

allennlp/nn/beam_search.py (+1 -1)

@@ -28,7 +28,7 @@ class BeamSearch:
 If not given, this just defaults to ``beam_size``. Setting this parameter
 to a number smaller than ``beam_size`` may give better results, as it can introduce
 more diversity into the search. See `Beam Search Strategies for Neural Machine Translation.
-Freitag and Al-Onaizan, 2017 <http://arxiv.org/abs/1702.01806>`_.
+Freitag and Al-Onaizan, 2017 <https://arxiv.org/abs/1702.01806>`_.
 """

 def __init__(self,
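A hedged sketch of the trade-off described above; the constructor arguments are assumptions about the signature, with ``end_index=0`` a hypothetical end-of-sequence token id.

```python
# Hedged sketch: a beam of 5 where each node proposes only 2 candidates,
# trading per-step greediness for more diversity across the beam.
from allennlp.nn.beam_search import BeamSearch

beam_search = BeamSearch(end_index=0, max_steps=20,
                         beam_size=5, per_node_beam_size=2)
```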

allennlp/nn/initializers.py (+12 -11)

@@ -7,17 +7,18 @@

 The available initialization functions are

-* `"normal" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.normal_>`_
-* `"uniform" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.uniform_>`_
-* `"constant" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.constant_>`_
-* `"eye" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.eye_>`_
-* `"dirac" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.dirac_>`_
-* `"xavier_uniform" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.xavier_uniform_>`_
-* `"xavier_normal" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.xavier_normal_>`_
-* `"kaiming_uniform" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.kaiming_uniform_>`_
-* `"kaiming_normal" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.kaiming_normal_>`_
-* `"orthogonal" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.orthogonal_>`_
-* `"sparse" <http://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.sparse_>`_
+* `"normal" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.normal_>`_
+* `"uniform" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.uniform_>`_
+* `"constant" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.constant_>`_
+* `"eye" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.eye_>`_
+* `"dirac" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.dirac_>`_
+* `"xavier_uniform" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.xavier_uniform_>`_
+* `"xavier_normal" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.xavier_normal_>`_
+* `"kaiming_uniform"
+  <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.kaiming_uniform_>`_
+* `"kaiming_normal" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.kaiming_normal_>`_
+* `"orthogonal" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.orthogonal_>`_
+* `"sparse" <https://pytorch.org/docs/master/nn.html?highlight=orthogonal#torch.nn.init.sparse_>`_
 * :func:`"block_orthogonal" <block_orthogonal>`
 * :func:`"uniform_unit_scaling" <uniform_unit_scaling>`
 * :class:`"pretrained" <PretrainedModelInitializer>`

allennlp/service/server_simple.py (+1 -1)

@@ -1,5 +1,5 @@
 """
-A `Flask <http://flask.pocoo.org/>`_ server for serving predictions
+A `Flask <https://palletsprojects.com/p/flask/>`_ server for serving predictions
 from a single AllenNLP model. It also includes a very, very bare-bones
 web front-end for exploring predictions (or you can provide your own).

allennlp/tests/data/dataset_readers/universal_dependencies_dataset_reader_test.py (+1 -1)

@@ -48,7 +48,7 @@ def test_read_from_file(self):
 assert fields["head_indices"].labels == [4, 4, 4, 0, 6, 4, 6, 6, 4]

 # This instance tests specifically for filtering of elipsis:
-# http://universaldependencies.org/u/overview/specific-syntax.html#ellipsis
+# https://universaldependencies.org/u/overview/specific-syntax.html#ellipsis
 # The original sentence is:
 # "Over 300 Iraqis are reported dead and 500 [reported] wounded in Fallujah alone."
 # But the second "reported" is elided, and as such isn't included in the syntax tree.

allennlp/tests/training/callback_trainer_test.py (+1 -1)

@@ -207,7 +207,7 @@ def test_trainer_can_run(self):

 @responses.activate
 def test_trainer_posts_to_url(self):
-    url = 'http://slack.com?webhook=ewifjweoiwjef'
+    url = 'https://slack.com?webhook=ewifjweoiwjef'
     responses.add(responses.POST, url)
     post_to_url = PostToUrl(url, message="only a test")
     callbacks = self.default_callbacks() + [post_to_url]

allennlp/tools/EVALB/LICENSE (+1 -1)

@@ -21,4 +21,4 @@ OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 OTHER DEALINGS IN THE SOFTWARE.

-For more information, please refer to <http://unlicense.org/>
+For more information, please refer to <https://unlicense.org/>

allennlp/tools/EVALB/evalb.dSYM/Contents/Info.plist (+1 -1)

@@ -1,5 +1,5 @@
 <?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
 <plist version="1.0">
 <dict>
 <key>CFBundleDevelopmentRegion</key>

allennlp/training/learning_rate_schedulers/__init__.py (+5 -5)

@@ -1,14 +1,14 @@
 """
 AllenNLP uses most
-`PyTorch learning rate schedulers <http://pytorch.org/docs/master/optim.html#how-to-adjust-learning-rate>`_,
+`PyTorch learning rate schedulers <https://pytorch.org/docs/master/optim.html#how-to-adjust-learning-rate>`_,
 with a thin wrapper to allow registering them and instantiating them ``from_params``.

 The available learning rate schedulers from PyTorch are

-* `"step" <http://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.StepLR>`_
-* `"multi_step" <http://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.MultiStepLR>`_
-* `"exponential" <http://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.ExponentialLR>`_
-* `"reduce_on_plateau" <http://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau>`_
+* `"step" <https://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.StepLR>`_
+* `"multi_step" <https://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.MultiStepLR>`_
+* `"exponential" <https://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.ExponentialLR>`_
+* `"reduce_on_plateau" <https://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau>`_

 In addition, AllenNLP also provides `cosine with restarts <https://arxiv.org/abs/1608.03983>`_,
 a Noam schedule, and a slanted triangular schedule, which are registered as
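The PyTorch behaviour behind the "step" name above, as a minimal sketch; the model and hyperparameters are illustrative.

```python
# "step" scheduler: multiply the learning rate by gamma every step_size epochs.
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    # ... one epoch of training would go here ...
    scheduler.step()  # lr: 0.1 for epochs 0-9, 0.05 for 10-19, 0.025 for 20-29
```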

allennlp/training/metrics/conll_coref_scores.py (+1 -1)

@@ -189,7 +189,7 @@ def muc(clusters, mention_to_gold):
 """
 Counts the mentions in each predicted cluster which need to be re-allocated in
 order for each predicted cluster to be contained by the respective gold cluster.
-<http://aclweb.org/anthology/M/M95/M95-1005.pdf>
+<https://aclweb.org/anthology/M/M95/M95-1005.pdf>
 """
 true_p, all_p = 0, 0
 for cluster in clusters:
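The docstring above is terse, so here is a toy restatement of the MUC link counting; this is not the repository's implementation, just a sketch of the idea it describes.

```python
# Toy sketch of MUC: each predicted cluster of size n contributes n - 1
# links; a link breaks wherever the cluster is split across gold
# partitions, and each unaligned mention counts as its own partition.
def muc_counts(predicted_clusters, mention_to_gold):
    correct, total = 0, 0
    for cluster in predicted_clusters:
        total += len(cluster) - 1
        partitions, unaligned = set(), 0
        for mention in cluster:
            if mention in mention_to_gold:
                partitions.add(mention_to_gold[mention])
            else:
                unaligned += 1
        correct += len(cluster) - len(partitions) - unaligned
    return correct, total
```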

allennlp/training/metrics/evalb_bracketing_scorer.py (+1 -1)

@@ -21,7 +21,7 @@ class EvalbBracketingScorer(Metric):
 """
 This class uses the external EVALB software for computing a broad range of metrics
 on parse trees. Here, we use it to compute the Precision, Recall and F1 metrics.
-You can download the source for EVALB from here: <http://nlp.cs.nyu.edu/evalb/>.
+You can download the source for EVALB from here: <https://nlp.cs.nyu.edu/evalb/>.

 Note that this software is 20 years old. In order to compile it on modern hardware,
 you may need to remove an ``include <malloc.h>`` statement in ``evalb.c`` before it

allennlp/training/optimizers.py (+10 -10)

@@ -1,18 +1,18 @@
 """
 AllenNLP just uses
-`PyTorch optimizers <http://pytorch.org/docs/master/optim.html>`_ ,
+`PyTorch optimizers <https://pytorch.org/docs/master/optim.html>`_ ,
 with a thin wrapper to allow registering them and instantiating them ``from_params``.

 The available optimizers are

-* `"adadelta" <http://pytorch.org/docs/master/optim.html#torch.optim.Adadelta>`_
-* `"adagrad" <http://pytorch.org/docs/master/optim.html#torch.optim.Adagrad>`_
-* `"adam" <http://pytorch.org/docs/master/optim.html#torch.optim.Adam>`_
-* `"sparse_adam" <http://pytorch.org/docs/master/optim.html#torch.optim.SparseAdam>`_
-* `"sgd" <http://pytorch.org/docs/master/optim.html#torch.optim.SGD>`_
-* `"rmsprop <http://pytorch.org/docs/master/optim.html#torch.optim.RMSprop>`_
-* `"adamax <http://pytorch.org/docs/master/optim.html#torch.optim.Adamax>`_
-* `"averaged_sgd <http://pytorch.org/docs/master/optim.html#torch.optim.ASGD>`_
+* `"adadelta" <https://pytorch.org/docs/master/optim.html#torch.optim.Adadelta>`_
+* `"adagrad" <https://pytorch.org/docs/master/optim.html#torch.optim.Adagrad>`_
+* `"adam" <https://pytorch.org/docs/master/optim.html#torch.optim.Adam>`_
+* `"sparse_adam" <https://pytorch.org/docs/master/optim.html#torch.optim.SparseAdam>`_
+* `"sgd" <https://pytorch.org/docs/master/optim.html#torch.optim.SGD>`_
+* `"rmsprop <https://pytorch.org/docs/master/optim.html#torch.optim.RMSprop>`_
+* `"adamax <https://pytorch.org/docs/master/optim.html#torch.optim.Adamax>`_
+* `"averaged_sgd <https://pytorch.org/docs/master/optim.html#torch.optim.ASGD>`_
 """

 import logging

@@ -52,7 +52,7 @@ def from_params(cls, model_parameters: List, params: Params): # type: ignore
 # e.g., {'params': [list of parameters], 'lr': 1e-3, ...}
 # Any config option not specified in the additional options (e.g.
 # for the default group) is inherited from the top level config.
-# see: http://pytorch.org/docs/0.3.0/optim.html?#per-parameter-options
+# see: https://pytorch.org/docs/0.3.0/optim.html?#per-parameter-options
 #
 # groups contains something like:
 #"parameter_groups": [
