(experiment) Fine grain locking #16089
Conversation
How do you plan to handle deadlocks? EDIT: Though thinking more... probably not an issue. I was thinking of cycles, but maybe dev-dep cycles will have a different hash?

Yeah, my assumption is that there would be no cycles in the unit graph, so by the time a unit is scheduled to run, all of its dependencies have already been built and their locks have been released.
I reverted the multiple-locks-per-build-unit approach for now. I plan to discuss the path forward in the next Cargo team meeting.

Closing this in favor of #16155
This PR adds fine grain locking for the build cache using build unit level locking. I'd recommend reading the design details in this description and then reviewing commit by commit.

Part of #4282
Previous attempt: #16089

## Design decisions / rationale

- Still hold `build-dir/<profile>/.cargo-lock`
  - to protect against `cargo clean` (exclusive)
  - changed from exclusive to shared for builds
- Using build unit level locking with a single lock per build unit.
  - Before checking fingerprint freshness we take a shared lock. This prevents reading a fingerprint while another build is active.
  - For units that are dirty, when the job server queues the job we take an exclusive lock to prevent others from reading while we compile.
    - This is done by dropping the shared lock and then acquiring an exclusive lock, rather than upgrading the lock in place, to protect against deadlock, see #16155 (comment)
  - After the unit's compilation is complete, we downgrade back to a shared lock, allowing other readers. (The full sequence is sketched below, after the open questions.)
  - All locks are released at the end of the entire build process
- artifact-dir was handled in #16307.

For the rationale behind this design, see the discussion [#t-cargo > Build cache and locking design @ 💬](https://rust-lang.zulipchat.com/#narrow/channel/246057-t-cargo/topic/Build.20cache.20and.20locking.20design/near/561677181)

## Open Questions

- [ ] Do we need rlimit checks and dynamic rlimits? #16155 (comment)
- [ ] Proper handling of blocking message (#16155 (comment))
  - Update Dec 18 2025: With the updated impl, we now get the blocking message when taking the initial shared lock, but we get no message when taking the exclusive lock right before compiling.
- [ ] Reduce parallelism when blocking
- [x] How do we want to handle locking on the artifact directory?
  - We could simply continue using coarse grain locking, locking and unlocking when files are uplifted.
    - One downside of locking/unlocking multiple times per invocation is that artifact-dir is touched many times across the compilation process (for example, there is a pre-rustc [clean up step](https://github.com/rust-lang/cargo/blob/master/src/cargo/core/compiler/mod.rs#L402)). We also need to take into account other commands like `cargo doc`.
  - Another option would be to only take a lock on the artifact-dir for commands that we know will uplift files (e.g. `cargo check` would not take a lock on artifact-dir but `cargo build` would). This would mean that two `cargo build` invocations would not run in parallel, because one of them would hold the artifact-dir lock (blocking the other). This might actually be ideal, to avoid two instances fighting over the CPU while recompiling the same crates.
  - Solved by #16307
- [ ] What should our testing strategy for locking be?
  - My testing strategy thus far has been to run cargo on dummy projects to verify the locking.
  - For the max file descriptor testing, I have been using the Zed codebase as a testbed, as it has over 1,500 build units, which is more than the default ulimit on my Linux system. (I am happy to test this on other large codebases that we think would be good to verify against.)
  - It's not immediately obvious to me how to create repeatable unit tests for this, or what those tests should be testing for.
  - For performance testing, I have been using hyperfine to benchmark builds with and without `-Zbuild-dir-new-layout`. With the current implementation I am not seeing any perf regression on Linux, but I have yet to test on Windows/macOS.
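For concreteness, here is a minimal sketch of the per-unit lock sequence described in the design decisions above, using `std::fs::File`'s locking API. `UnitLock` and its method names are invented for illustration; the real implementation may differ (for example, it could downgrade atomically where the platform allows it, instead of unlocking and re-locking as this sketch does).

```rust
use std::fs::File;
use std::io;
use std::path::Path;

/// Illustrative only: `UnitLock` and these method names are invented for
/// this sketch; they are not the actual cargo types.
struct UnitLock {
    file: File,
}

impl UnitLock {
    /// Take a shared lock on the unit's lock file before checking
    /// fingerprint freshness, so we never read a fingerprint while another
    /// build holds the unit exclusively.
    fn acquire_shared(lock_path: &Path) -> io::Result<Self> {
        let file = File::options()
            .create(true)
            .read(true)
            .write(true)
            .open(lock_path)?;
        file.lock_shared()?; // blocks until no exclusive holder remains
        Ok(UnitLock { file })
    }

    /// Dirty unit about to compile: release the shared lock, then take an
    /// exclusive one. Dropping before re-acquiring (instead of upgrading in
    /// place) avoids the upgrade deadlock where two shared holders each
    /// wait for the other to release.
    fn make_exclusive(&self) -> io::Result<()> {
        self.file.unlock()?;
        self.file.lock()
    }

    /// Compilation finished: fall back to a shared lock so other builds can
    /// read the freshly written artifacts. All locks are then held until
    /// the end of the whole build.
    fn downgrade_to_shared(&self) -> io::Result<()> {
        self.file.unlock()?;
        self.file.lock_shared()
    }
}
```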
---

<details><summary>Original Design</summary>

- Using build unit level locking instead of a temporary working directory.
  - After experimenting with multiple approaches, I am currently leaning towards build unit level locking.
  - The working directory approach introduces a fair bit of uplifting complexity, and the further along I pushed my prototype, the more I ran into unexpected issues:
    - mtime changes in fingerprints due to uplifting/downlifting order
    - tests/benches need to be run before being uplifted OR uplifted and locked during execution, which requires more locking design. (Also, running pre-uplift introduces other potential side effects, like the path displayed to the user being deleted because it is temporary.)
  - The trade-off here is that with build unit level locks, we need a more advanced locking mechanism and we will have more open locks at once.
    - The reason I think this is a worthwhile trade-off is that the locking complexity can largely be contained to a single module, whereas the uplifting complexity would be spread throughout the cargo codebase anywhere we do uplifting. The increased lock count, while unavoidable, can be mitigated (see below for more details).
- Risk of too many locks (file descriptors)
  - On Linux, 1024 is a fairly common default soft limit. Windows is even lower at 256.
  - Having 2 locks per build unit makes it possible to hit this with a moderate number of dependencies.
  - There are a few mitigations I could think of for this problem (that are included in this PR):
    - Increasing the file descriptor limits based on the number of build units (if the hard limit is high enough)
    - Sharing file descriptors for shared locks across jobs (within a single process) using a virtual lock
      - This could be implemented using reference counting.
    - Falling back to coarse grain locking if some heuristic is not met

### Implementation details

- We have a stateful lock per build unit made up of multiple file locks, `primary.lock` and `secondary.lock` (see the `locking.rs` module docs for more details on the states).
  - This is needed to enable pipelined builds.
- We fall back to coarse grain locking if fine grain locking is determined to be unsafe (see `determine_locking_mode()`).
  - Fine grain locking continues to take the existing `.cargo-lock` lock as RO shared, to continue working with older cargo versions while allowing multiple newer cargo instances to run in parallel.
  - Locking is disabled on network filesystems. (keeping existing behavior from #2623)
- `cargo clean` continues to use coarse grain locking for simplicity.
- File descriptors
  - I added functionality to increase the file descriptor limit if cargo detects that there will not be enough based on the number of build units in the `UnitGraph`.
  - If we aren't able to increase to a threshold (currently `number of build units * 10`), we automatically fall back to coarse grain locking and display a warning to the user.
    - I picked 10 times the number of build units as a conservative estimate for now. I think lowering this number may be reasonable.
    - While testing, I was seeing a peak of ~3,200 open file descriptors while compiling Zed. This is approximately 2x the number of build units.
    - Without the `RcFileLock` I was seeing peaks of ~12,000 open fds, which I felt was quite high even for a large project like Zed.
  - We use a global `FileLockInterner` that holds on to the file descriptors (`RcFileLock`) until the end of the process. (We could potentially add it to `JobState` if preferred; it would just be a bit more plumbing.) A rough sketch of this idea follows this section.

See #16155 (comment) for the proposal to transition away from this to the current scheme.

</details>
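The fd-sharing idea from the collapsed notes might look roughly like the following. This is an invented sketch using `Arc` for the reference counting; the real `FileLockInterner` and `RcFileLock` types differ.

```rust
use std::collections::HashMap;
use std::fs::File;
use std::io;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};

/// One shared-locked fd per lock path, kept alive by reference counting so
/// concurrent jobs in the same process don't each open their own descriptor.
/// Entries stay in the map until the process ends, matching the description
/// above.
struct FileLockInterner {
    locks: Mutex<HashMap<PathBuf, Arc<File>>>,
}

impl FileLockInterner {
    fn new() -> Self {
        FileLockInterner { locks: Mutex::new(HashMap::new()) }
    }

    /// Return the existing shared lock for `path`, or open and lock it once.
    fn shared(&self, path: &Path) -> io::Result<Arc<File>> {
        let mut locks = self.locks.lock().unwrap();
        if let Some(existing) = locks.get(path) {
            return Ok(Arc::clone(existing)); // reuse: no new fd opened
        }
        let file = File::options()
            .create(true)
            .read(true)
            .write(true)
            .open(path)?;
        file.lock_shared()?;
        let lock = Arc::new(file);
        locks.insert(path.to_path_buf(), Arc::clone(&lock));
        Ok(lock)
    }
}
```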

What does this PR try to resolve?
This is an experiment at adding fine grain locking (at a build unit level) during compilation.
With #15947 merged, we are unblocked to start experimenting with the more granular locking tracked in #4282.
The primary goal of this PR is to evaluate locking schemes and review their trade-offs (i.e. performance, complexity, etc.)
Implementation approach / details
The approach is to add a lockfile to each build unit dir (`build-dir/<profile>/build/<pkg>/<hash>/lock`) and acquire an exclusive lock during the compilation of that unit, as well as a shared lock on each of its dependencies. These locks are taken using `std::fs::File::{lock, lock_shared}`.
For this experiment, I found it easier to create the locking from scratch rather than reusing the existing locking systems in `Filesystem` and `CacheLocker`, as their interfaces require `gctx`, which is out of scope during the actual compilation phase passed to `Work::new()`. (Plumbing `gctx` into it, while possible, was a bit annoying due to lifetime issues.) I encapsulated all of the locking logic into `CompilationLock` in `locking.rs`.
Note: For now I simply reused the `-Zbuild-dir-new-layout` flag to enable fine grain locking, though we may want a standalone flag for this in the future.
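As a concrete illustration of that scheme, here is a minimal sketch of locking one unit for compilation. The function name and shape are invented for illustration; this is not the actual code in `locking.rs`.

```rust
use std::fs::File;
use std::io;
use std::path::Path;

/// Illustrative sketch: exclusive lock on the unit being compiled, shared
/// locks on each dependency's lock file so their outputs can't change
/// underneath us. The returned handles keep the locks alive.
fn lock_unit_for_compile(
    unit_dir: &Path,
    dep_dirs: &[&Path],
) -> io::Result<(File, Vec<File>)> {
    let open_lock = |dir: &Path| {
        File::options()
            .create(true)
            .read(true)
            .write(true)
            .open(dir.join("lock"))
    };

    // Exclusive: nobody may read this unit's outputs mid-compile.
    let unit_lock = open_lock(unit_dir)?;
    unit_lock.lock()?;

    // Shared: dependencies stay readable to others, but can't be rewritten.
    let mut dep_locks = Vec::with_capacity(dep_dirs.len());
    for &dir in dep_dirs {
        let dep_lock = open_lock(dir)?;
        dep_lock.lock_shared()?;
        dep_locks.push(dep_lock);
    }
    Ok((unit_lock, dep_locks))
}
```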
Benchmarking and experimenting
After verifying that the compilation functionality is working, I did some basic benchmarks with hyperfine on a test crate with ~200 total dependencies, to represent a small to medium sized crate. Benchmarks were run on a Fedora Linux x86 machine with a 20-core CPU.
Cargo.toml
(I didn't put a lot of thought into the specific dependencies. I simply grabbed some crates I knew had a good amount of transitive dependencies, so I did not need to add a lot of dependencies manually.)
Results:
From the results above we can see we are taking nearly a ~10% performance hit due to the locking overhead, which is quite bad IMO...
Out of curiosity, I also tried taking the shared locks in parallel using rayon's `.par_iter()` to see if that would improve the situation.
Code Change
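The collapsed code change is not reproduced here; below is a hypothetical sketch of the idea, assuming the dependency lock-file paths have already been collected.

```rust
use rayon::prelude::*;
use std::fs::File;
use std::io;
use std::path::PathBuf;

/// Hypothetical sketch: acquire the dependencies' shared locks in parallel
/// rather than one at a time.
fn lock_deps_parallel(dep_lock_paths: &[PathBuf]) -> io::Result<Vec<File>> {
    dep_lock_paths
        .par_iter() // fan the acquisitions out across rayon's thread pool
        .map(|path| {
            let file = File::options()
                .create(true)
                .read(true)
                .write(true)
                .open(path)?;
            file.lock_shared()?; // may block if a writer holds it exclusively
            Ok(file)
        })
        .collect() // a parallel iterator of Results collects into io::Result<Vec<_>>
}
```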
However, we can see this didn't really improve it by much, if at all.
Another idea I had was to see if taking a lock on the build unit directory (`build-dir/<profile>/build/<pkg>/<hash>`) directly, instead of writing a dedicated lock file, would have any effect. However, this also had minimal if any improvement compared to using a standalone file.
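A sketch of that directory-lock variant, assuming a Unix platform where an advisory lock can be taken on a directory handle (this does not port cleanly to Windows):

```rust
use std::fs::File;
use std::io;
use std::path::Path;

/// Hypothetical sketch: lock the unit directory itself instead of a
/// dedicated lock file. Unix-only: flock on a directory fd works there,
/// but Windows generally cannot lock a directory this way.
#[cfg(unix)]
fn lock_unit_dir_exclusive(unit_dir: &Path) -> io::Result<File> {
    // Opening a directory read-only yields a handle we can lock.
    let dir = File::open(unit_dir)?;
    dir.lock()?; // exclusive advisory lock on the directory itself
    Ok(dir)
}
```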
I also benchmarked a larger project with ~750 dependencies to see how the changes scale with large projects.
Note: This is without rayon, using the lockfile setup from the first benchmark above.
Cargo.toml
Other observations
I also ran a baseline to make sure the performance loss was not coming from the layout restructuring (as opposed to adding locking) by running the same bench without the locking changes (built from commit 81c3f77).