Description
Training the ListenAttendSpell model with the tensorboard logger and the cross_entropy criterion fails: collect_outputs passes a None cross_entropy_loss to self.log, and PyTorch Lightning raises a ValueError because NoneType values cannot be logged.
Environment info
- Platform: Ubuntu (Anaconda)
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0 (No GPU)
- Using GPU in script?: No
Information
Model I am using (ListenAttendSpell, Transformer, Conformer ...): ListenAttendSpell
The problem arises when using:
- the official example scripts: openspeech_cli/hydra_train.py (command below)
- my own modified scripts: a change to the logger default in configuration.py (details below)
To reproduce
Steps to reproduce the behavior:
- Use tensorboard as the logger instead of wandb. In configuration.py, I changed the default like below:

  logger: str = field(
      default="tensorboard", ...
  )
- Execute the training script:
python ./openspeech_cli/hydra_train.py \
    dataset=ksponspeech \
    dataset.dataset_path=$DATASET_PATH \
    dataset.manifest_file_path=$MANIFEST_FILE_PATH \
    dataset.test_dataset_path=$TEST_DATASET_PATH \
    dataset.test_manifest_dir=$TEST_MANIFEST_DIR \
    tokenizer=kspon_character \
    model=listen_attend_spell \
    audio=melspectrogram \
    lr_scheduler=warmup_reduce_lr_on_plateau \
    trainer=cpu \
    criterion=cross_entropy
- Then I see the following error:
File: "(중략)openspeech/models/openspeech_encoder_decoder_model.py", line 92, in collect_outputs
self.info({
File: "(중략)openspeech/models/openspeech_model.py", line 82, in info
self.log(key, value, prog_bar=True)
File: "(중략)/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 399, in log
apply_to_collection(
File: "(중략)/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 100, in apply_to_collection
return function(data, *args, **kwargs)
File: "(중략)/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 533, in __check_allowed
raise ValueError(f"'self.log({name}, {value})' was called, but '{type(v).name}' values cannot be logged")
ValueError:
self.log(val_cross_entropy_loss, None)' was called, but 'NoneType' values cannot be logged
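For reference, the same ValueError can be reproduced outside openspeech with a toy LightningModule. This is only a minimal sketch, assuming pytorch-lightning 1.x; ToyModel and the random dataset are made up for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.linear(x), y)
        self.log("train_loss", loss, prog_bar=True)            # fine: a Tensor
        self.log("cross_entropy_loss", None, prog_bar=True)    # raises the ValueError above
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


if __name__ == "__main__":
    data = TensorDataset(torch.randn(8, 4), torch.randn(8, 1))
    trainer = pl.Trainer(max_epochs=1, logger=False)
    trainer.fit(ToyModel(), DataLoader(data, batch_size=4))
```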
Checking the code, I think the cross_entropy_loss value is None, which causes the error.
In more detail, in openspeech/models/openspeech_encoder_decoder_model.py (openspeech-team/openspeech, main branch):
def collect_outputs(
    self,
    stage: str,
    logits: Tensor,
    encoder_logits: Tensor,
    encoder_output_lengths: Tensor,
    targets: Tensor,
    target_lengths: Tensor,
) -> OrderedDict:
    cross_entropy_loss, ctc_loss = None, None  # <-- cross_entropy_loss is initialized to None
    ...
    elif get_class_name(self.criterion) == "LabelSmoothedCrossEntropyLoss" \
            or get_class_name(self.criterion) == "CrossEntropyLoss":
        loss = self.criterion(logits, targets[:, 1:])  # <-- only loss is assigned a value
    else:
        ...
    self.info({
        f"{stage}_loss": loss,
        f"{stage}_cross_entropy_loss": cross_entropy_loss,  # <-- cross_entropy_loss is still None
        f"{stage}_ctc_loss": ctc_loss,
        f"{stage}_wer": wer,
        f"{stage}_cer": cer,
    })
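A possible workaround, sketched below under the assumption (consistent with the traceback above) that info in openspeech/models/openspeech_model.py simply iterates its dict argument and calls self.log for each entry: skip None values so Lightning is never asked to log them.

```python
def info(self, dictionary: dict) -> None:
    # Workaround sketch: only log entries that actually have a value,
    # so self.log is never called with None.
    for key, value in dictionary.items():
        if value is not None:
            self.log(key, value, prog_bar=True)
```

Alternatively, collect_outputs could, for example, set cross_entropy_loss = loss in the CrossEntropyLoss branch so the logged value is never None.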
Expected behavior
Training runs without this error.