
Small vectorization improvements #336

Merged
luisenp merged 3 commits into main from lep.small_vectorization_improv on Oct 28, 2022
Conversation

luisenp
Contributor

@luisenp luisenp commented Oct 22, 2022

This PR makes two changes:

  • Don't expand tensors in vectorization if the objective's batch size is 1.
  • Retract all variables now updates vectorization immediately, which avoids some cases where vectorization had to run twice on the same data (due to torch.no_grad() usage). A rough sketch of both changes follows below.
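
Not the actual theseus code; a minimal sketch of both ideas, assuming hypothetical names (`_expand_to_batch`, `Vectorizer`, `retract_all`) and a Manifold-style `retract`/`update` interface:

```python
import torch


def _expand_to_batch(tensor: torch.Tensor, batch_size: int) -> torch.Tensor:
    # Change 1: skip the expand entirely when the objective's batch size
    # is 1, so no broadcast view is created for the common case.
    if batch_size == 1:
        return tensor
    return tensor.expand(batch_size, *tensor.shape[1:])


class Vectorizer:
    # Hypothetical stand-in for the objective's cost-function vectorization.
    def __init__(self) -> None:
        self._stale = True

    def mark_stale(self) -> None:
        self._stale = True

    def update(self, variables) -> None:
        if self._stale:
            # ... rebuild the vectorized tensors from `variables` ...
            self._stale = False


def retract_all(variables, deltas, vectorizer: Vectorizer) -> None:
    # Change 2: update vectorization immediately after retracting, so a
    # later evaluation under torch.no_grad() does not have to re-run
    # vectorization on the same data.
    with torch.no_grad():
        for var, delta in zip(variables, deltas):
            var.update(var.retract(delta))  # hypothetical Manifold-style API
    vectorizer.update(variables)
```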

TODO: check whether this changes running times in the PGO benchmark. In a small BA script it looks like this leads to some improvement in running time (~10-20%); one way to measure this is sketched below.
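
Not part of the PR; a minimal timing harness one might use for the before/after comparison, assuming a `step_fn` callable that runs one optimizer step on the benchmark problem (e.g., PGO or a small BA script):

```python
import time
import torch


def mean_step_time(step_fn, n_warmup: int = 3, n_iters: int = 10) -> float:
    # Warm up first so one-time setup costs don't skew the measurement.
    for _ in range(n_warmup):
        step_fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for queued GPU work before timing
    start = time.perf_counter()
    for _ in range(n_iters):
        step_fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters
```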

@luisenp luisenp added the enhancement (New feature or request) label Oct 22, 2022
@luisenp luisenp self-assigned this Oct 22, 2022
@facebook-github-bot facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Oct 22, 2022
@luisenp luisenp force-pushed the lep.small_vectorization_improv branch from efa75fa to cd7d722 on October 28, 2022 at 16:11
@luisenp luisenp merged commit 2a05c10 into main Oct 28, 2022
@luisenp luisenp deleted the lep.small_vectorization_improv branch October 28, 2022 16:26
suddhu pushed a commit to suddhu/theseus that referenced this pull request Jan 21, 2023
* Vectorization doesn't expand if objective bs = 1.

* Retract all variables now updates vectorization.

* Fix typo.