
More Performant CachingHostAllocator for Pinned Memory Allocation #106606

Open
lausannel opened this issue Aug 4, 2023 · 0 comments
Labels
module: cuda Related to torch.cuda, and CUDA support in general triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@lausannel

lausannel commented Aug 4, 2023

🚀 The feature, motivation and pitch

I want to use a memory allocator for pinned memory allocation and came across the CachingHostAllocator in PyTorch. Unfortunately, its actual memory consumption is higher than expected: the allocator follows a power-of-two allocation strategy without memory coalescing, which results in substantial memory waste. This inefficiency increases memory consumption and leads to suboptimal utilization of resources, as the sketch below illustrates.
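To make the waste concrete, here is a minimal sketch (not the actual CachingHostAllocator code) of the overhead that pure power-of-two rounding introduces; the request sizes are hypothetical:

```python
# Minimal sketch of power-of-two bucketing waste; request sizes are made up.
def round_up_pow2(nbytes: int) -> int:
    """Round a request up to the next power of two, as a bucketing allocator would."""
    size = 1
    while size < nbytes:
        size *= 2
    return size

for mib in (33, 65, 129, 300):  # hypothetical pinned-memory requests, in MiB
    requested = mib * 2**20
    allocated = round_up_pow2(requested)
    waste = allocated - requested
    print(f"requested {mib:>3} MiB -> allocated {allocated / 2**20:>4.0f} MiB "
          f"({waste / 2**20:.0f} MiB wasted, {100 * waste / allocated:.0f}%)")
```

For example, a 33 MiB request is served from a 64 MiB pinned block, so nearly half of that pinned host memory sits unused unless free blocks are split or coalesced.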

The current implementation of the CachingHostAllocator in PyTorch therefore performs suboptimally for pinned memory allocation under certain conditions. Since pinned-memory allocation is a crucial component of deep learning workloads, efficient allocation and management are essential for optimal training and inference performance.

Alternatives

I suggest developing a more performant alternative to the existing CachingHostAllocator that addresses these concerns. The new allocator should focus on faster allocation, reduced memory fragmentation, and better use of modern hardware characteristics; a rough sketch of one possible direction follows below.
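As a purely illustrative sketch (not a proposed patch, and not how the current C++ allocator is structured), one direction is to cache pinned buffers at their exact sizes and reuse the smallest cached buffer that fits, rather than rounding every request up to a power-of-two bucket. The class and method names below are hypothetical, and the snippet needs a CUDA-enabled PyTorch build for pin_memory=True:

```python
import torch


class BestFitPinnedCache:
    """Hypothetical best-fit cache of pinned host buffers (illustration only)."""

    def __init__(self):
        self._free = []  # free pinned uint8 buffers, kept sorted by size

    def allocate(self, nbytes: int) -> torch.Tensor:
        # Best fit: reuse the smallest cached buffer large enough for the request.
        for i, buf in enumerate(self._free):
            if buf.numel() >= nbytes:
                return self._free.pop(i)
        # Cache miss: allocate exactly the requested size, no power-of-two rounding.
        return torch.empty(nbytes, dtype=torch.uint8, pin_memory=True)

    def free(self, buf: torch.Tensor) -> None:
        # Return the buffer to the cache and keep the free list sorted by size.
        self._free.append(buf)
        self._free.sort(key=torch.Tensor.numel)


cache = BestFitPinnedCache()
a = cache.allocate(33 * 2**20)  # 33 MiB allocated exactly
cache.free(a)
b = cache.allocate(32 * 2**20)  # reuses the 33 MiB buffer instead of a 64 MiB bucket
```

A production allocator would additionally need CUDA-event tracking so buffers are only reused once pending copies complete, plus some splitting/coalescing policy to bound fragmentation.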

Additional context

No response


cc @ptrblck

@cpuhrsch cpuhrsch added module: cuda Related to torch.cuda, and CUDA support in general triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Aug 5, 2023