Use IMPLICIT for test_theseus_layer and fix related bugs #431

Merged (3 commits) on Jan 18, 2023

Changes from 1 commit
Fixed bug that was causing broken graph for implicit diff and trust region methods.
luisenp committed Jan 17, 2023
commit f77f50fb02062e0eeceebb798b85413896c86f7e
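For context on the PR title: below is a minimal sketch of how a TheseusLayer can be driven with the IMPLICIT backward mode, assuming the theseus API around the time of this PR (early 2023). The toy data, residual_fn, and cost setup are illustrative and are not taken from test_theseus_layer.

import torch
import theseus as th

# Toy data for fitting y = v * x with a scalar parameter v.
x_data = th.Variable(torch.linspace(0.0, 1.0, 10).view(1, -1), name="x_data")
y_data = th.Variable(2.0 * x_data.tensor, name="y_data")
v = th.Vector(1, name="v")  # optimization variable solved by the inner loop

def residual_fn(optim_vars, aux_vars):
    (v,) = optim_vars
    x, y = aux_vars
    return y.tensor - v.tensor * x.tensor  # residual, shape (batch, 10)

objective = th.Objective()
objective.add(
    th.AutoDiffCostFunction(
        [v], residual_fn, 10, aux_vars=[x_data, y_data],
        cost_weight=th.ScaleCostWeight(1.0), name="fit",
    )
)
# LevenbergMarquardt is one of the trust-region-style optimizers the fix targets.
layer = th.TheseusLayer(th.LevenbergMarquardt(objective, max_iterations=15))

# An upstream tensor we want gradients for, passed in by variable name.
y_outer = (2.0 * torch.linspace(0.0, 1.0, 10).view(1, -1)).requires_grad_()
solution, info = layer.forward(
    {"v": torch.zeros(1, 1), "y_data": y_outer},
    optimizer_kwargs={"backward_mode": th.BackwardMode.IMPLICIT},
)
# With IMPLICIT, gradients flow through the solution via the implicit
# function theorem; this is the graph that the commit below keeps intact.
solution["v"].sum().backward()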
theseus/optimizer/nonlinear/nonlinear_optimizer.py (9 additions, 1 deletion)
@@ -378,6 +378,7 @@ def _optimize_loop(
                 info.last_err,
                 converged_indices,
                 force_update,
+                truncated_grad_loop=truncated_grad_loop,
                 **kwargs,
             )  # err is shape (batch_size,)
             if all_rejected:
@@ -566,12 +567,19 @@ def _step(
         previous_err: torch.Tensor,
         converged_indices: torch.Tensor,
         force_update: bool,
+        truncated_grad_loop: bool,
         **kwargs,
     ) -> Tuple[torch.Tensor, bool]:
         tensor_dict, err = self._compute_retracted_tensors_and_error(
             delta, converged_indices, force_update
         )
-        reject_indices = self._complete_step(delta, err, previous_err, **kwargs)
+        if truncated_grad_loop:

Review thread on the `if truncated_grad_loop:` line:

Contributor: Can't remember, is the implicit grad part of the truncated_grad_loop flag?

Contributor (Author): Yes, truncated_grad_loop is the final attached loop of "TRUNCATED" and "IMPLICIT". I was thinking that this is kind of a confusing name tbh.

Contributor: Yeah we could rename to implicit_and_trunc_grad_block or maybe ..._loop is okay. We can also add a comment to clarify.

+            # For "implicit" or "truncated", the grad-attached steps are just GN steps
+            # So, we need to avoid calling `_complete_step`, as it's likely to reject
+            # the step computed
+            reject_indices = None
+        else:
+            reject_indices = self._complete_step(delta, err, previous_err, **kwargs)

         if reject_indices is not None and reject_indices.all():
             return previous_err, True
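To make the review thread above concrete: TRUNCATED and IMPLICIT run the main optimization loop detached, then re-run a final short loop with autograd tracking, in which each step is a plain Gauss-Newton step that must be taken unconditionally. The toy sketch below (illustrative only, not Theseus source; the quadratic problem and all names are made up) shows this two-phase structure:

import torch

def solve(a: torch.Tensor, num_detached: int = 20, num_attached: int = 1) -> torch.Tensor:
    # Minimize f(x) = 0.5 * (x - a) ** 2 with Newton/GN steps; a toy
    # stand-in for the optimizer loop around `_step`.
    x = torch.zeros_like(a)

    def gn_step(x: torch.Tensor) -> torch.Tensor:
        # For this quadratic, the Gauss-Newton update is x - (x - a).
        return x - (x - a)

    # Main loop, detached: this is where trust-region bookkeeping (the
    # analogue of `_complete_step`) may accept or reject steps.
    with torch.no_grad():
        for _ in range(num_detached):
            x = gn_step(x)

    # Final grad-attached loop (the `truncated_grad_loop` case): steps are
    # taken unconditionally (reject_indices = None), so the graph from `a`
    # to the returned solution stays intact.
    for _ in range(num_attached):
        x = gn_step(x)
    return x

a = torch.tensor([3.0], requires_grad=True)
x_star = solve(a)
x_star.sum().backward()
print(a.grad)  # tensor([1.]), since x* = a

If the attached step went through an accept/reject test and got rejected, the returned solution would be the detached tensor from the main loop and no gradient could reach `a`; that is the "broken graph" the commit message refers to.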