
Add a differentiable sparse matrix vector product on top of our ops #392

Merged
merged 5 commits into from
Dec 8, 2022

Conversation

luisenp
Contributor

@luisenp luisenp commented Dec 1, 2022

The backward pass could be made more efficient on GPU with a custom CUDA kernel, but this should be reasonable enough for now.
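For readers of this thread: the gradient math a differentiable sparse matrix-vector product has to implement can be sketched in plain Python over a CSR layout. This is an illustrative sketch only; the function names and signatures below are invented for this example and are not the PR's actual `torch.autograd.Function` API.

```python
def csr_mv(crow, col, vals, x):
    """Forward pass: y = A @ x for A stored in CSR form (crow, col, vals)."""
    y = []
    for i in range(len(crow) - 1):
        acc = 0.0
        for k in range(crow[i], crow[i + 1]):
            acc += vals[k] * x[col[k]]
        y.append(acc)
    return y

def csr_mv_backward(crow, col, vals, x, grad_y):
    """Backward pass of y = A @ x.

    grad_x      = A^T @ grad_y
    grad_vals_k = grad_y[row(k)] * x[col(k)]   (only at stored nonzeros)
    """
    grad_x = [0.0] * len(x)
    grad_vals = [0.0] * len(vals)
    for i in range(len(crow) - 1):
        for k in range(crow[i], crow[i + 1]):
            grad_x[col[k]] += vals[k] * grad_y[i]   # accumulate A^T grad_y
            grad_vals[k] = grad_y[i] * x[col[k]]    # gradient w.r.t. stored values
    return grad_x, grad_vals
```

The `grad_x` accumulation is exactly a transpose product, which is why a dedicated CUDA kernel (as the comment suggests) could speed it up: the scatter into `grad_x[col[k]]` is the expensive part on GPU.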

@luisenp luisenp added the enhancement New feature or request label Dec 1, 2022
@luisenp luisenp self-assigned this Dec 1, 2022
@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 1, 2022
@luisenp luisenp changed the base branch from main to lep.sparse_solvers_py_float_support December 1, 2022 21:29
@luisenp luisenp force-pushed the lep.sparse_solvers_py_float_support branch from 049cf08 to 14918e1 Compare December 2, 2022 18:44
@luisenp luisenp force-pushed the lep.sparse_matrix_vector_prod branch from 8df7ef4 to 146e0bf Compare December 2, 2022 18:46
@luisenp luisenp force-pushed the lep.sparse_solvers_py_float_support branch from 14918e1 to 7d361a3 Compare December 5, 2022 16:57
@luisenp luisenp force-pushed the lep.sparse_matrix_vector_prod branch from 146e0bf to 1d703b1 Compare December 5, 2022 16:59
Contributor

@maurimo maurimo left a comment


Looks great! Happy to know we now have this; it can be used, for instance, with iterative solvers (not sure if that was already the plan).

@luisenp luisenp force-pushed the lep.sparse_matrix_vector_prod branch from 4a78a03 to be37500 Compare December 8, 2022 16:59
@luisenp
Contributor Author

luisenp commented Dec 8, 2022

> Looks great! Happy to know we now have this; it can be used, for instance, with iterative solvers (not sure if that was already the plan).

I want to add at least Conjugate Gradient at some point, but there are only so many hours in the day :) Will probably get to it eventually.
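For context on the Conjugate Gradient idea: CG only needs a matvec callback, which is exactly what this PR's sparse product provides. The minimal textbook loop, sketched in plain Python under the assumption of a symmetric positive-definite system (this is not Theseus code; `conjugate_gradient` and its signature are invented for illustration):

```python
def conjugate_gradient(matvec, b, x0=None, tol=1e-10, max_iter=100):
    """Solve A x = b for SPD A, given only a function computing A @ v."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    r = [bi - yi for bi, yi in zip(b, matvec(x))]  # residual b - A x
    p = list(r)                                    # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        beta = rs_new / rs
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Because the loop touches the matrix only through `matvec`, plugging in the differentiable sparse product from this PR would make the whole solve expressible on top of the same op.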

Base automatically changed from lep.sparse_solvers_py_float_support to main December 8, 2022 17:25
@luisenp luisenp force-pushed the lep.sparse_matrix_vector_prod branch from be37500 to 1e9dcb9 Compare December 8, 2022 17:26
@luisenp luisenp merged commit 7dca714 into main Dec 8, 2022
@luisenp luisenp deleted the lep.sparse_matrix_vector_prod branch December 8, 2022 18:17
Member

@mhmukadam mhmukadam left a comment


LGTM!

suddhu pushed a commit to suddhu/theseus that referenced this pull request Jan 21, 2023
…acebookresearch#392)

* Add autograd function for sparse matrix vector product.

* Add wrapper for sparse_mv in SparseLinearization.

* Added autograd function for sparse matrix transpose vector product.

* Add wrapper for sparse_mtv in SparseLinearization to make differentiable Atb.

* Fix dtype index bug.
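The sparse_mtv commits above cover the transpose product z = A^T b, which is what makes Atb differentiable. In the same illustrative CSR notation as before (a sketch, not the PR's implementation), the forward pass is a scatter over the stored nonzeros:

```python
def csr_mtv(crow, col, vals, b, n_cols):
    """Forward pass: z = A^T @ b for A stored in CSR form (crow, col, vals).

    CSR is row-major, so the transpose product scatters each row's
    contribution b[i] * vals[k] into z[col[k]] instead of reducing a row.
    """
    z = [0.0] * n_cols
    for i in range(len(crow) - 1):
        for k in range(crow[i], crow[i + 1]):
            z[col[k]] += vals[k] * b[i]
    return z
```

Note this is the same scatter pattern as the `grad_x` term in the mv backward pass, which is why the PR can reuse much of the machinery between sparse_mv and sparse_mtv.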