Add support for masking jacobians of zero weights in the batch #398
Conversation
LGTM. But I have some concerns about whether it is reasonable to ignore small non-zero cost weights.
LGTM!
LGTM.
* Added a `CostWeight.is_zero()` method.
* Added a masked_variable context for temporarily masking variables' tensors.
* Added logic to skip jacobian computation for zero weights in the batch.
* Enabled masked jacobians in vectorization.
* Detached the zero-mask computation and switched to a smaller EPS.
* Added is_zero for GPCostWeight.
* Changed the scale and diagonal weight is_zero checks to use == 0.
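As a rough illustration of the per-batch zero check described above, here is a minimal sketch in plain PyTorch. The function name `is_zero` and the tensor layout (batch dimension first) follow the PR's description, but this is a hypothetical standalone version, not the actual Theseus implementation:

```python
import torch

def is_zero(scale: torch.Tensor) -> torch.Tensor:
    # Hypothetical sketch of a scale weight's is_zero(): returns a per-batch
    # boolean mask using an exact == 0 comparison (as in this PR), detached
    # so the check itself never participates in autograd.
    return (scale == 0).all(dim=-1).detach()

# Example: a batch of 4 scalar scale weights, two of them exactly zero.
scales = torch.tensor([[0.5], [0.0], [1.0], [0.0]])
mask = is_zero(scales)  # per-batch mask: [False, True, False, True]
```

Using an exact `== 0` comparison (rather than a small-EPS threshold) avoids the concern raised in review about silently ignoring small non-zero cost weights.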
With this feature, the vectorizer computes errors and jacobians only over batch indices whose weights are known to be nonzero. Preliminary tests on a simple problem indicate savings of ~18% compute time on both CPU and GPU.
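The skip-and-scatter idea behind that saving can be sketched as follows. This is a simplified stand-in for the vectorizer's logic, with a hypothetical `masked_jacobians` helper and a toy `jac_fn`; the real code operates on Theseus cost functions rather than a bare callable:

```python
import torch

def masked_jacobians(jac_fn, x, zero_mask):
    # Hypothetical sketch: evaluate the jacobian only at batch indices whose
    # cost weight is nonzero, then scatter the results back into a
    # zero-filled full-batch tensor. Rows with zero weight contribute
    # nothing to the objective, so their jacobians can stay zero.
    nonzero_idx = (~zero_mask).nonzero(as_tuple=True)[0]
    jac_small = jac_fn(x[nonzero_idx])  # computed only for the kept rows
    jac_full = x.new_zeros(x.shape[0], *jac_small.shape[1:])
    jac_full[nonzero_idx] = jac_small
    return jac_full

# Toy example: batch of 4, two entries masked out by zero weights.
x = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
zero_mask = torch.tensor([False, True, False, True])
jac_full = masked_jacobians(lambda v: 2.0 * v, x, zero_mask)
```

The compute saving comes from `jac_fn` running on the smaller sub-batch; the scatter back keeps downstream shapes unchanged.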
For now I made the feature applicable only inside vectorization, although it might be safe to enable in general.