Description
🐛 Bug
I noticed that when I compute the dice loss on an entire batch, the result is smaller than computing it separately for each sample and then averaging. Is this behavior intended?
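A minimal sketch of why the two quantities differ, using a generic soft-dice formula (not segmentation_models_pytorch's actual implementation): when the intersection and cardinality sums are pooled over the whole batch before dividing, samples with little or no foreground are effectively down-weighted, so the batch-level loss can come out smaller than the per-sample average.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    # Generic soft dice: 1 - (2*|P∩T| + eps) / (|P| + |T| + eps),
    # summed over all elements of the arrays passed in.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical two-sample batch: one sample with foreground,
# one with an all-background target.
p1 = np.array([0.9, 0.8, 0.7, 0.1]); t1 = np.array([1.0, 1.0, 1.0, 0.0])
p2 = np.array([0.2, 0.1, 0.0, 0.0]); t2 = np.array([0.0, 0.0, 0.0, 0.0])

# Batch-level loss: sums pooled across both samples before the ratio.
batch_loss = dice_loss(np.concatenate([p1, p2]), np.concatenate([t1, t2]))

# Per-sample losses averaged afterwards: the empty-target sample
# contributes a loss near 1, pulling the mean up.
mean_loss = 0.5 * (dice_loss(p1, t1) + dice_loss(p2, t2))

print(batch_loss < mean_loss)  # the batch-level loss is smaller
```

With these numbers the batch loss is roughly 0.17 while the per-sample average is roughly 0.56, so the discrepancy reported above is expected behavior of the formula, not a numerical bug.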
Expected behavior
Dice loss on batch equivalent to average of dice losses
Environment
Using loss from segmentation_models_pytorch