This repository was archived by the owner on Dec 16, 2022. It is now read-only.

improve err msg for PolynomialDecay LR scheduler #5143

Merged
merged 4 commits into from
Apr 27, 2021
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## Unreleased

### Fixed

- Improved the error message for the `PolynomialDecay` LR scheduler when `num_steps_per_epoch` is missing.


## [v2.3.1](https://github.com/allenai/allennlp/releases/tag/v2.3.1) - 2021-04-20

13 changes: 13 additions & 0 deletions allennlp/training/learning_rate_schedulers/polynomial_decay.py
@@ -1,6 +1,7 @@
from overrides import overrides
import torch

from allennlp.common.checks import ConfigurationError
from allennlp.training.learning_rate_schedulers.learning_rate_scheduler import LearningRateScheduler


@@ -41,6 +42,18 @@ def __init__(
):
super().__init__(optimizer, last_epoch)

# Sanity check here.
if num_steps_per_epoch is None:
raise ConfigurationError(
                "'num_steps_per_epoch' is required for this LR scheduler.\n\n"
                "If you know how many batches per epoch your training data has, you can set this value "
                "directly in your config. Otherwise you'll need to configure your data loader "
                "so that it can report an accurate number of batches per epoch. "
                "If you're using the MultiProcessDataLoader, "
                "this means you either need to set 'batches_per_epoch' "
                "or leave 'max_instances_in_memory' as None (if your entire dataset fits into memory)."
)

self.power = power
self.warmup_steps = warmup_steps
self.total_steps = num_epochs * num_steps_per_epoch
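The guard added in this PR can be illustrated with a minimal standalone sketch. `check_num_steps_per_epoch` is a hypothetical helper written for this example (it is not part of allennlp), and `ConfigurationError` here is a local stand-in for `allennlp.common.checks.ConfigurationError`; the point is only to show when the new error fires.

```python
class ConfigurationError(ValueError):
    """Local stand-in for allennlp.common.checks.ConfigurationError."""


def check_num_steps_per_epoch(num_steps_per_epoch):
    # Mirrors the sanity check added to PolynomialDecay.__init__:
    # the scheduler cannot compute total_steps without it.
    if num_steps_per_epoch is None:
        raise ConfigurationError(
            "'num_steps_per_epoch' is required for this LR scheduler."
        )
    return num_steps_per_epoch


# With a value, construction proceeds normally:
steps = check_num_steps_per_epoch(100)

# Without one, the improved error is raised instead of a
# confusing TypeError from `num_epochs * None` later on:
try:
    check_num_steps_per_epoch(None)
except ConfigurationError as err:
    print("raised:", err)
```

In a real config, the value would either be set explicitly or derived from the data loader (e.g. via `batches_per_epoch` on the `MultiProcessDataLoader`), as the error message describes.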