onnxruntime-directml slower than onnxruntime cpu #175
Comments
Nothing is wrong. The old Intel GPU is just weak.
@bleu48 Thanks for your reply. Although this Intel GPU is old, its compute speed should still be faster than the CPU, especially for a CV model.
@carter54: have you solved this problem yet? I have it too >_<
Me too. |
Me too... |
I solved mine. My takeaway: with DirectML my inferences run about 2x as fast as on the CPU, but creating a session takes roughly one inference's worth of time to load.
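A minimal sketch of what that comment suggests: since DirectML session creation carries a noticeable one-time cost, build the `InferenceSession` once and reuse it across calls rather than recreating it per request. The model path and caching pattern below are illustrative assumptions, not from the original thread.

```python
import onnxruntime as ort

# Build the session once at startup and reuse it; DirectML session creation
# carries a one-time initialization cost comparable to a full inference.
_SESSION = None

def get_session(model_path="model.onnx"):  # placeholder path
    global _SESSION
    if _SESSION is None:
        _SESSION = ort.InferenceSession(
            model_path, providers=["DmlExecutionProvider"]
        )
    return _SESSION
```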
System information
To Reproduce
Hi~ I tried to use ONNX Runtime with the DirectML backend to accelerate model inference. However, it turns out that DirectML is slower than the CPU... I did see high GPU usage during DirectML inference. Can anyone help me see what's going wrong here, please?
The model I tested can be found here.
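For anyone trying to reproduce this, here is a hedged timing sketch comparing the two execution providers with the standard onnxruntime Python API. The model path, input shape handling, and run count are assumptions to adapt to the model from the issue; the provider names (`CPUExecutionProvider`, `DmlExecutionProvider`) and session options are the documented ones. Note that the DirectML EP requires memory pattern optimization and parallel execution to be disabled.

```python
import time
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"  # placeholder for the model linked in the issue

def avg_inference_time(providers, runs=50):
    so = ort.SessionOptions()
    # The DirectML EP does not support memory pattern optimization or
    # parallel execution, so disable both explicitly.
    so.enable_mem_pattern = False
    so.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL
    sess = ort.InferenceSession(MODEL_PATH, sess_options=so, providers=providers)
    inp = sess.get_inputs()[0]
    # Replace symbolic/dynamic dimensions with 1; adjust for your model.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    data = np.random.rand(*shape).astype(np.float32)
    sess.run(None, {inp.name: data})  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: data})
    return (time.perf_counter() - start) / runs

print("CPU     :", avg_inference_time(["CPUExecutionProvider"]), "s/inference")
print("DirectML:", avg_inference_time(["DmlExecutionProvider"]), "s/inference")
```

The warm-up run matters here: the first DirectML inference includes shader/graph initialization, so timing it would overstate the per-inference cost that the comments above are debating.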