
Add a workaround for NonlinearOptimizer rejecting all batch steps #388

Merged: 7 commits, Dec 1, 2022

Conversation

@luisenp (Contributor) commented Nov 30, 2022

In this version we sometimes have to recompute the error norm when some steps are rejected, so optimizing error vectorization becomes important again; that will be addressed in a separate PR.
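The all-reject handling described above can be sketched as follows. This is a hypothetical illustration, not the theseus implementation: `apply_step_with_reject`, `error_fn`, and the argument names are made up for this example. The idea is that after a batched step, entries whose error norm increased are reverted, which forces the error norm for those entries to be carried over (or recomputed) rather than reused from the proposed step.

```python
import torch

def apply_step_with_reject(error_fn, x, delta, last_err_norm):
    # Hypothetical sketch: apply a batched step, reject batch entries whose
    # error norm increased, revert those entries, and keep their old norm.
    x_new = x + delta
    new_err_norm = error_fn(x_new).norm(dim=1)   # per-batch-entry error norm
    reject = new_err_norm > last_err_norm        # boolean mask of rejected steps
    x_new[reject] = x[reject]                    # revert rejected entries
    new_err_norm[reject] = last_err_norm[reject] # keep the old norm for them
    return x_new, new_err_norm, reject
```

Reverting per batch entry is what makes the error norm bookkeeping tricky: the accepted and rejected entries end up with norms computed at different iterates, which is why rejection can force a recomputation.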

@luisenp luisenp added the bug Something isn't working label Nov 30, 2022
@luisenp luisenp requested a review from fantaosha November 30, 2022 13:33
@luisenp luisenp self-assigned this Nov 30, 2022
@facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors must sign the CLA before a PR can be reviewed) Nov 30, 2022
@fantaosha (Contributor) left a comment

LGTM. Please make sure the predicted error for LM and dogleg is also divided by 2.
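The reviewer's point about the factor of 2 is that if the optimizer's cost is defined as half the squared residual norm, then the quadratic model used for the predicted error must carry the same 1/2 factor, or the gain ratio comparing actual vs. predicted decrease is wrong. A minimal sketch (function name and signature are hypothetical, not the theseus API):

```python
import torch

def predicted_error(r, J, delta):
    # If the cost is (1/2) * ||r||^2, the linearized prediction of the cost
    # after step `delta` must also be halved: m(delta) = (1/2) * ||r + J @ delta||^2.
    lin = r + J @ delta
    return 0.5 * (lin * lin).sum()
```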

theseus/optimizer/nonlinear/nonlinear_optimizer.py — two review threads (outdated, resolved)
@luisenp force-pushed the lep.reject_logic_fix branch from f8c4d48 to a8d3dc7 on December 1, 2022 03:40
@fantaosha (Contributor) left a comment

LGTM. Just one minor comment.

@luisenp luisenp merged commit e2e17f1 into main Dec 1, 2022
@luisenp luisenp deleted the lep.reject_logic_fix branch December 1, 2022 21:13
@mhmukadam (Contributor) left a comment

LGTM!

@@ -107,6 +107,8 @@ def resolve(key: Union[str, "BackwardMode"]) -> "BackwardMode":
# ignoring indices given by `reject_indices`
# 3. Check convergence
class NonlinearOptimizer(Optimizer, abc.ABC):
_MAX_ALL_REJECT_ATTEMPTS = 3
A contributor commented on this line:

Any tradeoff or overhead to consider here? Curious about the choice of 3 attempts. Are these all kept in the compute graph?

Also might be helpful to add in a comment that these max attempts are per iteration and not overall.
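The per-iteration cap the commenter asks about can be sketched as a small retry loop. This is a hedged illustration, not the theseus code: `step_with_all_reject_retries`, `try_step`, and `increase_damping` are hypothetical names, and only the constant mirrors `_MAX_ALL_REJECT_ATTEMPTS` from the diff.

```python
import torch

MAX_ALL_REJECT_ATTEMPTS = 3  # mirrors _MAX_ALL_REJECT_ATTEMPTS in the diff

def step_with_all_reject_retries(try_step, increase_damping):
    # Hypothetical sketch: if *every* batch entry rejects the proposed step,
    # retry (e.g., with higher damping) up to a fixed number of attempts.
    # Note the cap applies per optimizer iteration, not over the whole run.
    for _attempt in range(MAX_ALL_REJECT_ATTEMPTS):
        reject_indices = try_step()      # boolean mask, one entry per batch item
        if not reject_indices.all():
            return reject_indices        # at least one entry accepted the step
        increase_damping()               # make the next proposal more conservative
    return reject_indices                # give up on this iteration
```

A small fixed cap keeps the worst case bounded: each retry costs one extra linearization/solve, so 3 attempts at most triples the per-iteration cost while usually succeeding on the first retry.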

suddhu pushed a commit to suddhu/theseus that referenced this pull request Jan 21, 2023
…cebookresearch#388)

* Small refactor of nonlinear optimizer _step.

* Included use of adaptive damping in end-to-end test.

* Changed all reject logic to skip the latest iteration.

* Added NonlinearOptimizer._error_metric() method.

* Prevented incorrect combination of ellipsoidal and adaptive damping in LM.

* Fix LM unit test to account for unsupported damping case (ellipsoidal + adaptive).