Support grad backprop when add/sub use broadcast #87

Open · wants to merge 3 commits into master

Conversation

nogginly (Contributor)

@christopherzimmerman, this fixes grad backpropagation for addition/subtraction when broadcasting occurs because one operand has a different rank than the other.

  • Modified the Add/Sub gates to subclass TwoOpGate
  • Made TwoOpGate an abstract class with an abstract #backward method (since every subclass has to implement it)
  • Modified the +/- operator definitions to pass in the operands
  • Modified the add/sub backward methods to accept the operands
  • Added a private convenience method to backprop the add/sub gradient (see the sketch below)
    • I originally had a faster path for handling scalar add/sub, but had to remove it until Tensor#sum has an OCL implementation.
    • Let me know if there's a faster way to implement the rank-by-rank sum that I'm using iteratively.

I've included tests, which are based on results I get from PyTorch.
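
For reference, here is a rough NumPy sketch of the kind of reduction I mean; the helper name and shapes are made up for illustration, and the actual change in this PR is Crystal code, not this:

```python
import numpy as np

def reduce_broadcast_grad(grad, shape):
    # Sum a broadcasted gradient back down to the shape the operand had
    # before broadcasting. (Illustrative helper; not the PR's Crystal code.)
    while grad.ndim > len(shape):
        grad = grad.sum(axis=0)           # drop extra leading dims
    for axis, size in enumerate(shape):
        if size == 1 and grad.shape[axis] != 1:
            grad = grad.sum(axis=axis, keepdims=True)
    return grad

# A (3,) bias broadcast against a (4, 3) activation: the incoming
# gradient has shape (4, 3) and must be reduced back to (3,).
print(reduce_broadcast_grad(np.ones((4, 3)), (3,)).shape)  # (3,)
```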

@christopherzimmerman (Member)

@nogginly thanks for the PR. I'm going to need a bit longer to review this, since when I implemented it I made some different design decisions around broadcasted operations than other libraries.

I always set the gradient to match the value that was actually used in the calculation, so in this case, since the broadcast happens during the operation, the gradient will match that shape. Are you saying that PyTorch aggregates that back down to match the dimensions of the initial variable before the operation?

@christopherzimmerman (Member)

Also, for the rank-by-rank sum, there is a "view_along_axis" iterator in some version of this code that gives a view into multiple axes; it can probably be used to reduce multiple rank sums. I'll look for it.
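
As a rough NumPy illustration of that idea (not the view_along_axis iterator or this library's API), several broadcast axes can be collapsed in a single reduction rather than rank by rank:

```python
import numpy as np

# Collapsing several broadcast axes in one call instead of summing
# rank by rank: np.sum accepts a tuple of axes.
grad = np.ones((2, 5, 4, 3))      # gradient with the broadcast (result) shape
reduced = grad.sum(axis=(0, 1))   # remove the two leading broadcast dims at once
print(reduced.shape)              # (4, 3)
```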

@nogginly (Contributor, Author)

> I always set the gradient to match the value that was actually used in the calculation, so in this case, since the broadcast happens during the operation, the gradient will match that shape. Are you saying that PyTorch aggregates that back down to match the dimensions of the initial variable before the operation?

Hello @christopherzimmerman. Yes, that is correct; I wrote a simple test using PyTorch and got exactly that behavior. I ran into this while implementing a two-layer MLP: when I tried to update the biases using the gradient, the shapes were off and matmul didn't work, which is how I discovered this. The tests I added are based on those PyTorch test cases.
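
For reference, a minimal PyTorch check of this behavior looks something like the following (shapes are illustrative, not the exact ones from my tests):

```python
import torch

x = torch.ones(4, 3, requires_grad=True)  # activations
b = torch.zeros(3, requires_grad=True)    # lower-rank bias, broadcast on add

(x + b).sum().backward()

print(x.grad.shape)  # torch.Size([4, 3])
print(b.grad.shape)  # torch.Size([3]) -- reduced back to b's original shape
```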
