torch.lstm raised an error with backend #613
Comments
Hello, I ran into the same problem. Did you solve it?
No, I had to train my model on another machine with an Nvidia GPU. I suspect it is a compatibility issue that needs to be fixed by the team.
I encountered the same issue. To add more details:
Based on my tests, the LSTM module works correctly in versions 230426 and 240521. However, the latest two versions, 240614 and 240715, have issues with the LSTM module. @Mithzyl @yigedabuliu
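For anyone matching their results against the dated builds above, a small sketch for checking which build is installed (this assumes torch-directml was installed from PyPI under that distribution name):

```python
# Print the installed torch-directml build so it can be compared against
# the dated builds mentioned above (e.g. ...240521 vs ...240614).
import importlib.metadata

print(importlib.metadata.version("torch-directml"))
```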
AMD Vega 64 and R9 Nano:
NotImplementedError: Could not run 'aten::_thnn_fused_lstm_cell' with arguments from the 'CPU' backend.
Same issue; it needs to be fixed.
I made a custom model with LSTM layers, found that it might be possible to run it with DirectML, and then encountered this error:
NotImplementedError: Could not run 'aten::_thnn_fused_lstm_cell' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_thnn_fused_lstm_cell' is only available for these backends: [PrivateUse1, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
PrivateUse1: registered at C:__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:58 [backend fallback]
It seems the DML device name 'privateuseone' does not match the predefined backend name for LSTM, 'PrivateUse1'. Very tricky.
Reproduce:
Simply define an LSTM network and run inference; the error will appear (a minimal sketch follows below).
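A minimal sketch of the reproduction, assuming torch-directml is installed; the layer sizes are arbitrary and chosen only for illustration:

```python
import torch
import torch.nn as nn
import torch_directml

dml = torch_directml.device()  # the DirectML device ('privateuseone:0')

# Arbitrary small sizes; any nn.LSTM hits the same dispatch path.
lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1).to(dml)
x = torch.randn(5, 3, 8, device=dml)  # (seq_len, batch, input_size)

with torch.no_grad():
    # On the affected builds this raises:
    # NotImplementedError: Could not run 'aten::_thnn_fused_lstm_cell'
    # with arguments from the 'CPU' backend.
    out, (h, c) = lstm(x)
```

As a stopgap, keeping the LSTM module and its inputs on the plain "cpu" device sidesteps DirectML entirely; the native CPU LSTM implementation does not go through 'aten::_thnn_fused_lstm_cell', so it should run, though without GPU acceleration.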