memory_stats with torch-directml #444

Open
bvhari opened this issue Apr 28, 2023 · 2 comments
Labels
pytorch-directml Issues in PyTorch when using its DirectML backend

Comments

@bvhari

bvhari commented Apr 28, 2023

I am looking for the torch-directml equivalent of torch.cuda.memory_stats.
I tried looking through this repo, but I couldn't find anything related.
If available, this would be very helpful for monitoring GPU VRAM.
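For context, a minimal sketch of the CUDA call being asked about next to the torch-directml device setup (torch.cuda.memory_stats and torch_directml.device are documented APIs; the rest is illustrative):

```python
import torch
import torch_directml

# On CUDA, per-device allocator statistics come from torch.cuda.memory_stats:
if torch.cuda.is_available():
    stats = torch.cuda.memory_stats()  # dict of allocator counters
    print(stats["allocated_bytes.all.current"])

# torch-directml exposes a device object, but no documented
# memory_stats() equivalent at the time of this issue:
dml = torch_directml.device()
x = torch.ones(1024, 1024, device=dml)  # this allocation occupies VRAM
```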

@zhangxiang1993 zhangxiang1993 added the pytorch-directml Issues in PyTorch when using its DirectML backend label Apr 30, 2023
@bvhari

bvhari commented May 8, 2023

I found the functions gpu_memory and torch_directml_native.get_gpu_memory, but both return an array of 0.0 values. The length of the array seems to correspond to half the GPU's VRAM size in GB. Is this the intended interpretation? Is there a more straightforward way to get VRAM details?
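A minimal sketch of the observation above, assuming gpu_memory is exposed at the torch_directml module level and takes no required arguments (the function names come from this thread; the exact signature is an assumption):

```python
import torch
import torch_directml

dml = torch_directml.device()
t = torch.ones(4096, 4096, device=dml)  # occupy some VRAM (~64 MB fp32)

# As reported above, this returns an array filled with 0.0 whose length
# appears to track roughly 0.5 * (VRAM size in GB).
mem = torch_directml.gpu_memory()  # assumed signature
print(len(mem), mem[:8])
```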

@lshqqytiger

lshqqytiger commented May 10, 2023

gpu_memory() does not carry much information. It expands as VRAM is occupied (by tensor creation): its length is not fixed and only tells us how much VRAM is currently occupied. What we actually need is the VRAM size available to torch.
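A sketch of the behavior described above: the array grows as tensors are created, so its length, not its values, is the usable signal (this interpretation and the no-argument gpu_memory() call are assumptions based on this thread):

```python
import torch
import torch_directml

dml = torch_directml.device()

before = len(torch_directml.gpu_memory())
# Allocate 64 fp32 tensors of 1024x1024 (~256 MB total) on the DML device.
buffers = [torch.empty(1024, 1024, device=dml) for _ in range(64)]
after = len(torch_directml.gpu_memory())

# The length reportedly grows with occupation; total/available VRAM
# is still not exposed, which is the gap this issue points out.
print(f"length before={before}, after={after}")
```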
