Use IMPLICIT for test_theseus_layer and fix related bugs #431

Merged 3 commits on Jan 18, 2023
11 changes: 10 additions & 1 deletion tests/test_theseus_layer.py
@@ -321,7 +321,16 @@ def cost_weight_fn():
             cost_weight_param_name: cost_weight_fn(),
         }
         pred_vars, info = layer_to_learn.forward(
-            input_values, optimizer_kwargs={**optimizer_kwargs, **{"verbose": verbose}}
+            input_values,
+            optimizer_kwargs={
+                **optimizer_kwargs,
+                **{
+                    "verbose": verbose,
+                    "backward_mode": "implicit"
+                    if learning_method == "direct"
+                    else "unroll",
+                },
+            },
         )
         assert not (
             (info.status == th.NonlinearOptimizerStatus.START)
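For context, the optimizer_kwargs passed to TheseusLayer.forward are forwarded to the inner optimizer, and "backward_mode" selects how gradients are computed through the inner optimization ("unroll", "implicit", and "truncated" are the modes this PR touches). The snippet below is a minimal, self-contained sketch of a forward call using backward_mode="implicit"; the objective, variables, and data are illustrative and not taken from the test.

import torch
import theseus as th

# Illustrative toy problem (not from the test): fit a scalar x so that x ~ y.
x = th.Vector(1, name="x")
y = th.Variable(torch.ones(1, 1), name="y")

def err_fn(optim_vars, aux_vars):
    # Residual is simply x - y for this toy objective.
    (x_var,) = optim_vars
    (y_var,) = aux_vars
    return x_var.tensor - y_var.tensor

objective = th.Objective()
objective.add(th.AutoDiffCostFunction([x], err_fn, 1, aux_vars=[y], name="diff"))
layer = th.TheseusLayer(th.GaussNewton(objective, max_iterations=10))

inputs = {"x": torch.zeros(1, 1), "y": 2 * torch.ones(1, 1, requires_grad=True)}
solution, info = layer.forward(
    inputs,
    optimizer_kwargs={"verbose": False, "backward_mode": "implicit"},
)
# The solution is x* = y, so the gradient of x* w.r.t. y should be ~1 here.
solution["x"].sum().backward()
print(solution["x"], inputs["y"].grad)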
4 changes: 4 additions & 0 deletions theseus/optimizer/autograd/lu_cuda_sparse_autograd.py
@@ -51,6 +51,10 @@ def forward( # type: ignore
             batch_size, A_row_ptr, A_col_ind, A_val_double, AtA_row_ptr, AtA_col_ind
         )
         if damping_alpha_beta is not None:
+            damping_alpha_beta = (
+                damping_alpha_beta[0].double(),
+                damping_alpha_beta[1].double(),
+            )
             AtA_args = sparse_structure.num_cols, AtA_row_ptr, AtA_col_ind, AtA
             apply_damping(batch_size, *AtA_args, *damping_alpha_beta)
         solver_context.factor(AtA)
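Side note on the cast above: the solve path already assembles the system in float64 (A_val_double), while the damping pair can arrive in the model's float32, so both entries are cast to double before apply_damping; presumably the custom CUDA kernel operates on raw double buffers and cannot rely on PyTorch's type promotion. A small sketch of the same pattern in plain PyTorch (names and the damping formula are illustrative, not the theseus kernel):

import torch

# Hypothetical damping values, e.g. produced in the model's default float32.
alpha = torch.tensor(1e-3)
beta = torch.tensor(1e-2)

# Diagonal of AtA assembled in double precision, mirroring A_val_double above.
AtA_diag = torch.rand(8, dtype=torch.float64)

# Mirror the fix: cast both damping terms to double before combining them with
# double-precision data (a kernel reading raw double buffers needs matching
# dtypes; pure PyTorch ops would merely promote silently).
alpha, beta = alpha.double(), beta.double()
damped_diag = AtA_diag + alpha * AtA_diag + beta  # LM-style damping (illustrative)
print(damped_diag.dtype)  # torch.float64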
10 changes: 9 additions & 1 deletion theseus/optimizer/nonlinear/nonlinear_optimizer.py
@@ -378,6 +378,7 @@ def _optimize_loop(
                info.last_err,
                converged_indices,
                force_update,
+                truncated_grad_loop=truncated_grad_loop,
                **kwargs,
            )  # err is shape (batch_size,)
            if all_rejected:
@@ -566,12 +567,19 @@ def _step(
         previous_err: torch.Tensor,
         converged_indices: torch.Tensor,
         force_update: bool,
+        truncated_grad_loop: bool,
         **kwargs,
     ) -> Tuple[torch.Tensor, bool]:
         tensor_dict, err = self._compute_retracted_tensors_and_error(
             delta, converged_indices, force_update
         )
-        reject_indices = self._complete_step(delta, err, previous_err, **kwargs)
+        if truncated_grad_loop:
Contributor: Can't remember, is the implicit grad part of the truncated_grad_loop flag?

Contributor (author): Yes, truncated_grad_loop is the final grad-attached loop used for "TRUNCATED" and "IMPLICIT". I was thinking that this is kind of a confusing name, to be honest.

Contributor: Yeah, we could rename it to implicit_and_trunc_grad_block, or maybe ..._loop is okay. We can also add a comment to clarify.

# For "implicit" or "truncated", the grad-attached steps are just GN steps
# So, we need to avoid calling `_complete_step`, as it's likely to reject
# the step computed
reject_indices = None
else:
reject_indices = self._complete_step(delta, err, previous_err, **kwargs)

if reject_indices is not None and reject_indices.all():
return previous_err, True
Expand Down
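To make the new branch concrete: the IMPLICIT and TRUNCATED backward modes end with a grad-attached loop whose updates are plain Gauss-Newton steps, and, as the inline comment notes, `_complete_step` would likely reject them, which would defeat their purpose of building the autograd graph for the backward pass. Below is a self-contained toy (not the theseus implementation; all names are illustrative) showing the same pattern on a 1-D least-squares problem: most iterations run detached with an accept/reject test, and only the last few run attached with no rejection.

import torch

def residual(x, theta, t, y):
    # Toy residual: r_i(x) = theta * exp(t_i * x) - y_i
    return theta * torch.exp(t * x) - y

def fit(theta, t, y, num_iters=20, attached_iters=2):
    x = torch.zeros((), dtype=torch.float64)
    for i in range(num_iters):
        attached = i >= num_iters - attached_iters
        ctx = torch.enable_grad() if attached else torch.no_grad()
        with ctx:
            r = residual(x, theta, t, y)
            J = theta * t * torch.exp(t * x)            # dr/dx, one entry per data point
            x_new = x - (J * r).sum() / (J * J).sum()   # 1-D Gauss-Newton step
            if attached:
                # Grad-attached steps (the analogue of truncated_grad_loop=True):
                # plain GN updates, never rejected.
                x = x_new
            elif residual(x_new, theta, t, y).square().sum() < r.square().sum():
                # Detached steps: accept only if the squared error decreases.
                x = x_new
    return x

theta = torch.tensor(1.5, dtype=torch.float64, requires_grad=True)
t = torch.linspace(0.0, 1.0, 10, dtype=torch.float64)
y = 1.5 * torch.exp(0.7 * t)
x_hat = fit(theta, t, y)
x_hat.backward()  # the gradient w.r.t. theta flows only through the attached steps
print(x_hat.item(), theta.grad.item())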