
Don't share caches between different jobs by default #22

Closed
TimJentzsch opened this issue Aug 2, 2024 · 3 comments · Fixed by #23
Labels
bug Something isn't working

Comments

@TimJentzsch (Collaborator)

Consider this scenario:

You have one job which runs `cargo test` and one which runs `cargo check`.
These jobs will need separate compilations, as they run with different profiles.

Right now it can happen that they share a cache: maybe `cargo check` runs first and then `cargo test` cannot find a cache, but _falls back_ to the cache generated by `cargo check`.

This can bloat the cache size, as the `target` folder now contains the artifacts of both jobs.
The problem accumulates, since the cache is never cleaned up.

A quick solution would be to remove the last cache key fallback, which does not depend on the job ID. Then the `cargo check` and `cargo test` jobs would never share a cache.
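To illustrate the proposal, here is a hypothetical sketch of how ordered restore keys could be built. The `restore_keys` function, key prefixes, and layout are illustrative assumptions, not the action's actual implementation.

```python
def restore_keys(job_id: str, lockfile_hash: str) -> list[str]:
    """Ordered restore keys, most specific first (illustrative layout)."""
    return [
        f"cargo-{job_id}-{lockfile_hash}",  # exact match for this job and lockfile
        f"cargo-{job_id}-",                 # same job, older lockfile
        # "cargo-",  # <- the cross-job fallback proposed for removal: it lets
        #               `cargo test` restore a `cargo check` cache, mixing both
        #               jobs' artifacts into one `target/` directory.
    ]
```

With the last fallback gone, a cache miss for one job can no longer be satisfied by another job's cache, so each job's `target` folder only ever contains its own artifacts.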

This issue has been discovered via TheBevyFlock/bevy_quickstart#212

@benfrankel (Contributor)

I actually wonder if it would be best to fully remove cache fallbacks (if it's not possible to prune the `target/` directory automatically). Even if just the Cargo.toml file changes multiple times, it could cause cache size ballooning, with unused dependencies/versions getting pulled into the new cache, though maybe not as rapidly as with fallbacks between different jobs.

Needs some testing and consideration of the tradeoffs to decide, though.

@TimJentzsch (Collaborator, Author)

Hmm, this needs some testing. Compiling without a cache definitely takes a long time, but maybe it's worth the saved storage space?

@benfrankel (Contributor) commented Aug 5, 2024

For some anecdotal data: My jam game just had a failed CI due to the runner running out of storage space. I checked and my caches were all ~3.2GB each. I deleted the saved caches and re-ran the same CI workflow, and the new caches were ~400MB, 400MB, and 1.2GB.
