Dice loss backward function is wrong? #2
@ljpadam:
Hi,
In the backward function of the dice loss class, is this statement wrong?

    grad_input = torch.cat((torch.mul(dDice, -grad_output[0]),
                            torch.mul(dDice, grad_output[0])), 0)

I think it should expand dDice from one dimension to two dimensions, and then concatenate the two parts along the second dimension.
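For readers trying to follow the shape argument, below is a minimal, self-contained sketch of a soft-Dice loss with a hand-written backward that returns a gradient of the shape the issue asks for. This is not the repository's code: the (N, 2) input layout, the foreground-only Dice, and the names `SoftDiceLoss`, `probs`, and `target` are assumptions made for illustration, and it uses the modern static-method `Function` API rather than the pre-0.4 style quoted above.

```python
import torch
from torch.autograd import Function

class SoftDiceLoss(Function):
    """Soft Dice on the foreground channel of an (N, 2) softmax output.
    Illustrative sketch only -- not the repository's implementation."""

    @staticmethod
    def forward(ctx, probs, target):
        # probs: (N, 2) class probabilities; target: (N,) binary labels.
        p = probs[:, 1]                       # foreground probability per voxel
        t = target.to(probs.dtype)
        inter = (p * t).sum()
        union = p.sum() + t.sum()
        ctx.save_for_backward(t, inter, union)
        return 1.0 - 2.0 * inter / union      # loss = 1 - Dice

    @staticmethod
    def backward(ctx, grad_output):
        t, inter, union = ctx.saved_tensors
        # d(Dice)/dp per voxel: a 1-D tensor of shape (N,).
        dDice = 2.0 * (t * union - inter) / (union * union)
        # The gradient returned for probs must have probs' shape, (N, 2), so
        # dDice is expanded to a column and concatenated along dim=1, not dim=0.
        # This simplified loss ignores the background channel, so that column
        # is zero; the quoted code instead fills it with the opposite sign.
        grad_fg = (-grad_output * dDice).unsqueeze(1)  # d(loss)/dp = -d(Dice)/dp
        grad_bg = torch.zeros_like(grad_fg)
        grad_input = torch.cat((grad_bg, grad_fg), dim=1)
        return grad_input, None               # no gradient w.r.t. target
```

Under these assumptions, `torch.autograd.gradcheck(SoftDiceLoss.apply, (probs, target))` with double-precision `probs` agrees with the hand-written gradient, while a dim-0 concatenation as in the quoted code produces a 1-D tensor of length 2N and fails autograd's gradient shape check in recent PyTorch versions.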
Comments
@ljpadam:
The dice loss may be wrong; I think the statement should be changed to:

    grad_input = torch.cat((torch.mul(dDice, grad_output[0]),
                            torch.mul(dDice, -grad_output[0])), dim=1)
@ljpadam @anewlearner @JDwangmo Did you guys find a proper implementation that works fine?
Found the right implementation. Just use these two lines in the backward function.
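The two lines themselves were not captured in this copy of the thread. A hypothetical reconstruction, using the names from the snippets above and matching the fix discussed here (expand `dDice` to two dimensions, then concatenate along `dim=1`):

```python
# Hypothetical reconstruction of the referenced fix, not the commenter's
# verbatim code. Inside backward(): dDice has shape (N,); unsqueeze(1) makes
# each part (N, 1), so the concatenated gradient is (N, 2), matching the
# forward input. grad_output[0] follows the pre-0.4 convention used in this
# thread; the sign order depends on how the forward defines the loss.
dDice = dDice.unsqueeze(1)
grad_input = torch.cat((torch.mul(dDice, grad_output[0]),
                        torch.mul(dDice, -grad_output[0])), dim=1)
```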
@hamidfarhid Thanks, it works.
@Victor-2015 @hfarhidzadeh Could you show your dice loss code? The code above doesn't run for me because I hit another problem:

    RuntimeError: arguments are located on different GPUs at /pytorch/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:269

Do you know how to solve it?
I found the solution to my problem.
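The commenter doesn't say what the solution was. A common cause of this RuntimeError in hand-written backward functions of that era is a fresh tensor allocated on the default GPU while the saved tensors live on another one (as happens under `nn.DataParallel`); the usual remedy, sketched below under that assumption, is to allocate new tensors relative to an existing one so they inherit its device:

```python
import torch

def make_grad_columns(dDice):
    # Hypothetical illustration, not the commenter's actual fix. dDice stands
    # in for the per-voxel gradient computed inside backward().
    # Wrong under multi-GPU: torch.zeros(dDice.shape[0], 1) lands on the
    # default device, and a later pointwise op then mixes devices, raising
    # "arguments are located on different GPUs".
    zeros = torch.zeros_like(dDice.unsqueeze(1))   # same device/dtype as dDice
    return torch.cat((zeros, dDice.unsqueeze(1)), dim=1)
```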
@hfarhidzadeh Hi, why do we need to cat the two parts? Why not just return dDice * grad_output?
I think the way they implemented it, we end up with one matrix with two columns, not a vector. That's off the top of my head; I don't remember the details. :)
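The shape requirement behind this: autograd expects `backward()` to return one gradient per forward input, each with the same shape as that input. If the loss's input is the two-channel `(N, 2)` output, the 1-D `dDice` alone cannot be returned, which is why the two parts are stacked as columns. A minimal demonstration of the shapes, with names assumed as above:

```python
import torch

N = 4
probs = torch.rand(N, 2)     # two-channel input to the loss
dDice = torch.randn(N)       # per-voxel derivative, shape (N,)

grad_input = torch.cat((dDice.unsqueeze(1), -dDice.unsqueeze(1)), dim=1)
assert grad_input.shape == probs.shape   # gradient must match the input's shape
```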
Thanks. 👍