As per the Arch Wiki:
> Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available.
By increasing the swap size, we can do a few things:
- Significantly reduce memory pressure
  - More data can stay cached, while VRAM is still free to grow a bit larger
- Have a stash of "emergency memory" in case physical memory runs low
  - This prevents bulk evictions and spreads memory management over a longer period of time, preventing latency spikes
```shell
# Disable all existing swap
sudo swapoff -a
# Create the swapfile (SIZE_IN_GB is your chosen size)
sudo dd if=/dev/zero of=/home/swapfile bs=1G count=SIZE_IN_GB status=none
# Swap must be readable and writable only by root
sudo chmod 0600 /home/swapfile
# Format and enable it
sudo mkswap /home/swapfile
sudo swapon /home/swapfile
```
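To sanity-check the result, these standard util-linux commands (my addition, not part of the original steps) show what swap is active and how large it is:

```shell
# Confirm the new swapfile is active and sized as expected
swapon --show
# Overall memory and swap usage at a glance
free -h
```

If your system doesn't already activate `/home/swapfile` at boot (stock SteamOS references it already), a conventional `/etc/fstab` entry of the form `/home/swapfile none swap defaults 0 0` makes it persistent.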
Also from the Arch Wiki:
> The swappiness sysctl parameter represents the kernel's preference (or avoidance) of swap space. Swappiness can have a value between 0 and 200 (max 100 if Linux < 5.8), the default value is 60. A low value causes the kernel to avoid swapping, a high value causes the kernel to try to use swap space, and a value of 100 means IO cost is assumed to be equal. Using a low value on sufficient memory is known to improve responsiveness on many systems.
By default, the Deck has a very high swappiness of 100, which can lead to data going to swap when there's a lot of physical memory left.
This can be bad for two reasons:
- Excessive writes can shorten the life of your drive
- Swap is much slower than memory, and using it slows things down
So, by lowering swappiness (my recommended value is 1), we can:
- Ensure that swap is only used as a last resort, when it's really needed
- Preserve drive health
```shell
echo VALUE | sudo tee /proc/sys/vm/swappiness
```
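The echo above takes effect immediately but resets on reboot. A quick way to check the current value, plus a persistence sketch (the drop-in filename is my choice, not from the original guide):

```shell
# Read the current swappiness (the Deck ships with 100)
cat /proc/sys/vm/swappiness

# To persist across reboots, put the setting in a sysctl.d drop-in
# (run these manually as root; the filename is arbitrary):
#   echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-swappiness.conf
#   sudo sysctl --system
```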
From an excellent writeup by Emin here:
> When the CPU assigns memory to processes that require it, it typically does so in 4 KB page chunks. Because the CPU’s MMU actively needs to translate virtual memory to physical memory upon incoming I/O requests, going through all 4 KB pages is naturally an expensive operation. Fortunately, it has its own TLB cache (translation lookaside buffer), which reduces the potential amount of time required to access a specific memory address by caching the most recently used memory.
As the explanation shows, translating many small pages is expensive. Hugepages (typically 2 MB instead of 4 KB) cover the same memory with far fewer page-table and TLB entries, making lookups much cheaper and reducing a lot of stutter when dealing with large amounts of memory.
```shell
echo always | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```
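To see what's currently in effect (a read-only check I've added; paths are the standard THP sysfs locations):

```shell
# The active THP mode appears in [brackets], e.g. "always [madvise] never"
cat /sys/kernel/mm/transparent_hugepage/enabled
# How much anonymous memory is currently backed by hugepages
grep AnonHugePages /proc/meminfo
```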
As per the kernel docs:
> The mount is used for SysV SHM, memfds, shared anonymous mmaps (of /dev/zero or MAP_ANONYMOUS), GPU drivers’ DRM objects, Ashmem.
Essentially, setting this to `advise` allows those things to be backed by hugepages when an application asks for it (via madvise).
For the same reasons as enabling hugepages, this can reduce some latency in memory management.
```shell
echo advise | sudo tee /sys/kernel/mm/transparent_hugepage/shmem_enabled
```
This feature proactively defragments memory when Linux detects "downtime".
Even the kernel documentation agrees that this feature has a system-wide impact on performance:
> Note that compaction has a non-trivial system-wide impact as pages belonging to different processes are moved around, which could also lead to latency spikes in unsuspecting applications.
Essentially, even though Linux tries to detect the proper time to do compaction, there's never a good time during gaming, so it's best to disable it.
```shell
echo 0 | sudo tee /proc/sys/vm/compaction_proactiveness
```
It's the same thing as proactive compaction, but for hugepages.
See the reasons for disabling proactive compaction.
```shell
echo 0 | sudo tee /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
```
PLU configures how many times a process can try to take a lock on a page before "fair" behavior kicks in and guarantees that process access to the page. See the commit for details.
Unfortunately, it can have negative side effects, especially in gaming: making processes wait repeatedly causes heavy stutter in games and puts some processes to sleep when they shouldn't be.
```shell
echo 1 | sudo tee /proc/sys/vm/page_lock_unfairness
```
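One caveat for all of the echoes into `/sys` above: they reset on every boot. A sketch for reapplying them automatically, using systemd-tmpfiles `w` entries (the filename, and the decision to persist at all, are mine rather than part of the guide):

```
# /etc/tmpfiles.d/memory-tweaks.conf  (hypothetical filename)
# w = write the argument to the path at boot
w /sys/kernel/mm/transparent_hugepage/enabled - - - - always
w /sys/kernel/mm/transparent_hugepage/shmem_enabled - - - - advise
w /sys/kernel/mm/transparent_hugepage/khugepaged/defrag - - - - 0
```

The values under `/proc/sys` are better handled by a sysctl.d drop-in instead (`vm.swappiness`, `vm.compaction_proactiveness`, `vm.page_lock_unfairness`).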
Some wording and general sanity checks were provided by Emin, who is likely to be a big contributor going forward, given his interest in low-level Linux optimizations.
The rest comes from the Arch Wiki, Phoronix, the kernel docs, and various bits of knowledge I've gathered over the years.