GitHub Actions fails with: "no space left on device" #2875
Comments
Hello, @Imipenem
Thanks @al-cheb, will try this.
This worked, thanks @al-cheb!
@Imipenem Having faced this situation so many times, I decided to stop reinventing the wheel 😁 and wrote a GitHub Action to do this properly. Open to suggestions. I noticed these lines:
but I am not sure what they remove.
@jlumbroso the first one doesn't exist anymore on the image. The second one removes all the pre-cached tools (Node, Go, Python, Ruby) that are used by the corresponding actions, such as https://github.com/actions/setup-python
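Putting the advice from this thread together, a minimal cleanup step might look like the sketch below. The directory paths are assumptions based on what is commonly reported for the hosted Ubuntu images and may change between image versions, so verify them with `du -sh` on your own runner first. It dry-runs by default; set `DO_DELETE=1` to actually remove anything.

```shell
#!/bin/sh
# Sketch of a disk-cleanup step for a GitHub-hosted Ubuntu runner.
# Assumption: the paths below match the current image layout.
# Dry-run by default; set DO_DELETE=1 to really delete.
for dir in /usr/share/dotnet /usr/local/lib/android /opt/ghc \
           "${AGENT_TOOLSDIRECTORY:-}"; do
  if [ -z "$dir" ] || [ ! -d "$dir" ]; then
    echo "skip (not present): ${dir:-\$AGENT_TOOLSDIRECTORY}"
    continue
  fi
  du -sh "$dir"                # show what would be reclaimed
  if [ "${DO_DELETE:-0}" = "1" ]; then
    sudo rm -rf "$dir"         # setup-* actions will re-download tools later
  fi
done
df -h /                        # remaining space on the root mount
```

Note that deleting `$AGENT_TOOLSDIRECTORY` also removes the cached Python versions, which is why some workflows break afterwards; skip that entry if a later step relies on the pre-cached toolchains.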
Thanks so much @miketimofeev! Thanks to you, I have added an option
* Move program folder to programs/token-auth-rules
* Rename packages/sdk to clients/js-solita
* Move cli into clients
* Add configs folder
* Remove old JS configs
* Remove old scripts
* Update package.json and replace yarn with pnpm
* Update GitHub workflows
* Format config folder
* Add Amman client back to JS Solita client
* Add Umi JS client
* Regenerate client with correct program ID
* Add .env file from GitHub actions
* Fix CLI dependencies
* Enable Umi library publishing on NPM
* Use buildjet runner when testing programs

Without it we're getting a "no space left on device" error. See actions/runner-images#2875
14 GB default space is probably not enough for us anymore:

```
[ec2-user@ip-172-31-49-57 ipa]$ du -sh target/
36G	target/
```

see the discussion here: actions/runner-images#2875
…-hosted runners (#38) We're hitting `no space left on device` errors during llvm and stablehlo builds with the free GitHub-hosted runners, which come with limited disk space allocations. Cleaning up the runners as recommended [here](https://github.com/orgs/community/discussions/25678), [here](https://github.com/orgs/community/discussions/26351) and in actions/runner-images#2875 has helped partially (increasing the root mount from 20G to 47G), but it still isn't enough for some builds. Temporarily disable the 3p builds on push to main (they will still run as non-merge-gating on PRs), and re-enable once self-hosted runners are set up.
Description
I recently noticed that one of my GitHub workflows failed with the above-mentioned error.
Link to one of those runs: https://github.com/Imipenem/mypytest/runs/2060441074.
I followed #709 and added:
```
sudo rm -rf "/usr/local/share/boost"
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
```
But this led to errors where I could not run some Python packages that are crucial for my workflow.
So I decided to remove only some space-consuming packages instead, but the error still persists.
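Rather than deleting blindly, it can help to first measure which pre-installed components are actually large on the image, then remove only the ones the workflow never uses. A quick sketch with plain coreutils (the roots listed are just common starting points, not an exhaustive list):

```shell
#!/bin/sh
# List the largest directories one level under the usual suspects,
# then show overall usage of the root mount.
for root in /usr/share /usr/local /opt; do
  [ -d "$root" ] || continue
  du -xh --max-depth=1 "$root" 2>/dev/null
done | sort -rh | head -20     # 20 largest entries, human-readable sizes
df -h /
```

Running this as an early workflow step makes it easy to decide which `sudo rm -rf` lines are worth adding without breaking tools you still need.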
Area for Triage:
Question, Bug, or Feature?:
Bug
Virtual environments affected
Image version
Version: 20210302.0
Expected behavior
The workflow should pass.
Actual behavior
The workflow keeps failing because the runner runs out of disk space.
Repro steps
Exact error:
```
Error processing tar file(exit status 1): write /home/user/miniconda/envs/mypytest/lib/python3.9/site-packages/torch/lib/libtorch_cuda_cu.so: no space left on device
Error: exit status 1
```
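When debugging this class of failure, it is worth logging free space right before the step that fails; the numbers make it obvious whether cleanup steps are actually helping. A minimal check (no image-specific assumptions beyond `/mnt` possibly existing):

```shell
#!/bin/sh
# Print free space on the mounts a build typically fills up.
df -h / /mnt 2>/dev/null || df -h /
# The tar-extract error above comes from writing a Docker image layer;
# if Docker is in play, its own accounting shows where the space went.
docker system df 2>/dev/null || echo "docker not available"
```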
Any ideas @miketimofeev or others?
Thanks ;)