This is a good suggestion, thanks for bringing it up! One question is about the expected behavior when passing `torch.device("cuda")`. Should we automatically map this to `"cuda:0"` on our side? As you say, this seems to be the case for torch tensors, but for some reason `torch.device()` doesn't do this automatically. Not sure if this is an oversight on PyTorch's side or a deliberate choice.
🚀 Feature
It would be great to be able to call
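(presumably a one-liner along these lines; the exact snippet is a reconstruction from the split two-line version quoted under the failure reasons below)

```python
# `optimizer` is a Theseus optimizer built beforehand, as in the two-line version below
theseus_optim = th.TheseusLayer(optimizer).to(torch.device("cuda"))
```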
as well as
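(presumably the same call with a plain device string, per the second failure reason below)

```python
theseus_optim = th.TheseusLayer(optimizer).to("cuda")
```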
Currently this fails for two reasons:

1. `TheseusLayer.to` returns `None`, and thus the one-liner needs to be split in two: `theseus_optim = th.TheseusLayer(optimizer); theseus_optim.to(torch.device('cuda'))`
2. Using `'cuda'` fails, as this is mapped to `cuda:0` in PyTorch tensors and the comparison then fails in `Objective.update`:
`theseus/theseus/core/objective.py`, lines 775 to 780 in 9a117fd
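For context, the PyTorch device behavior involved can be reproduced in isolation (a minimal illustration assuming a CUDA-capable machine; this is not theseus code):

```python
import torch

# A tensor created with the bare "cuda" string lands on an indexed device:
t = torch.zeros(1, device="cuda")
print(t.device)                               # cuda:0

# torch.device("cuda") keeps index=None, so a naive equality check fails:
print(torch.device("cuda"))                   # cuda
print(t.device == torch.device("cuda"))       # False
print(t.device == torch.device("cuda:0"))     # True
```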
Motivation
This would make the use of Theseus more convenient.
Pitch
See above
Alternatives
See above
Additional context
`theseus/theseus/theseus_layer.py`, lines 137 to 140 in 9a117fd
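A minimal sketch of the behavior this request implies (illustrative only; `ChainableLayer` and `_resolve_device` are hypothetical stand-ins, not the actual `TheseusLayer` code referenced above): `to()` returns `self` so the constructor call can be chained, and a bare `"cuda"` device is normalized to an indexed one before any comparison against tensor devices.

```python
import torch


def _resolve_device(device):
    # Hypothetical helper: map a bare "cuda" (index None) to an indexed device,
    # so comparisons against tensor.device (which is always indexed) succeed.
    device = torch.device(device)
    if device.type == "cuda" and device.index is None:
        device = torch.device("cuda", torch.cuda.current_device())
    return device


class ChainableLayer(torch.nn.Module):  # stand-in for illustration only
    def to(self, device, *args, **kwargs):
        super().to(_resolve_device(device), *args, **kwargs)
        return self  # returning self (instead of None) enables the one-liner usage above
```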
Other issues with `to` have been discussed in: