The "Statement of need" (lines 19-22) mentioned that existing deep learning frameworks can speed up computation, but cannot perform analytical derivatives needed for the trial function method implemented by nnde.
Can this point be clarified or expanded?
For example: PyTorch implements methods for automatic differentiation (autograd) and numerical methods to compute gradients. What is an example where these are insufficient?
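To illustrate what I mean, here is a minimal sketch of using PyTorch autograd to obtain the first and second derivatives of a network output with respect to its input, which is the kind of derivative a trial-function approach to ODEs/PDEs needs. The network architecture and the names `net`, `x` are purely illustrative assumptions, not anything taken from nnde:

```python
# Hypothetical sketch: torch.autograd derivatives of a network output w.r.t. its input.
import torch

# Illustrative network (architecture chosen arbitrarily, not from nnde).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.Sigmoid(),
    torch.nn.Linear(10, 1),
)

# Sample points on [0, 1]; requires_grad_ so derivatives w.r.t. x can be taken.
x = torch.linspace(0.0, 1.0, 11).reshape(-1, 1).requires_grad_(True)
N = net(x)

# First derivative dN/dx (create_graph=True so it can be differentiated again).
dN_dx, = torch.autograd.grad(N, x, grad_outputs=torch.ones_like(N), create_graph=True)

# Second derivative d2N/dx2, e.g. for a second-order differential-equation residual.
d2N_dx2, = torch.autograd.grad(dN_dx, x, grad_outputs=torch.ones_like(dN_dx))

print(dN_dx.shape, d2N_dx2.shape)
```

If the limitation is instead about something more specific (e.g. closed-form derivatives of the trial function itself rather than derivatives evaluated at sample points), stating that explicitly in the paper would make the comparison with existing frameworks clearer.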