I am looking for the torch-directml equivalent to torch.cuda.memory_stats
I tried looking through this repo, but I couldn't find anything related.
If available, this would be very helpful for GPU VRAM monitoring.
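For reference, this is the kind of introspection available on the CUDA backend, which I could not find a counterpart for in torch-directml:

```python
import torch

# The CUDA memory-introspection APIs this issue asks a DirectML
# equivalent for. They are only usable when a CUDA device is present.
if torch.cuda.is_available():
    stats = torch.cuda.memory_stats()            # dict of allocator counters
    print(stats["allocated_bytes.all.current"])  # bytes currently allocated
    print(torch.cuda.memory_allocated())         # same value, as a shortcut
    print(torch.cuda.memory_reserved())          # bytes held by the caching allocator
```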
Found the functions gpu_memory and torch_directml_native.get_gpu_memory, but both return an array of 0.0 values. The array's length seems to correspond to half the GPU's VRAM size in GB. Is that the intended interpretation? Is there a more straightforward way to get VRAM details?
gpu_memory() does not carry much information. The array grows as VRAM is occupied (e.g. by tensor creation); its length is not fixed and only indicates how much VRAM is currently in use. What we actually need is the VRAM size available to torch.
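A minimal sketch of the behavior described above, assuming gpu_memory is exposed at the torch_directml top level as noted in these comments (the no-argument call and the tensor size are illustrative):

```python
import torch
import torch_directml

dml = torch_directml.device()

# Per the comments above, gpu_memory() returns a list of 0.0 values
# whose length tracks VRAM occupation, not a stats dict like
# torch.cuda.memory_stats().
before = torch_directml.gpu_memory()
print(f"before allocation: {len(before)} entries")

x = torch.zeros(1024, 1024, 256, device=dml)  # ~1 GiB of float32

after = torch_directml.gpu_memory()
print(f"after allocation:  {len(after)} entries")
# len(after) - len(before) should grow with the newly occupied VRAM,
# but the values themselves are all 0.0, so total/free VRAM cannot be
# read from this API.
```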