5 changes: 5 additions & 0 deletions src/cargo/core/compiler/build_runner/compilation_files.rs
@@ -444,6 +444,11 @@ impl<'a, 'gctx: 'a> CompilationFiles<'a, 'gctx> {
.map(Arc::clone)
}

/// Returns the path to the acquisition lock, used to prevent multiple Cargo processes from deadlocking.
pub fn acquisition_lock(&self) -> &Path {
self.host.build_dir().acquisition_lock()
}

/// Returns the path where the output for the given unit and `FileType`
/// should be uplifted to.
///
14 changes: 14 additions & 0 deletions src/cargo/core/compiler/build_runner/mod.rs
@@ -178,6 +178,15 @@ impl<'a, 'gctx> BuildRunner<'a, 'gctx> {
self.check_collisions()?;
self.compute_metadata_for_doc_units();

// When -Zfine-grain-locking is enabled, we take an "acquisition lock" exclusively
// and hold it while taking the other locks. Holding it prevents other Cargo
// processes from taking their own locks concurrently, which could otherwise
// deadlock. We drop the acquisition lock before we start executing jobs, allowing
// other Cargo processes to compile in parallel if they do not share build units.
if self.bcx.gctx.cli_unstable().fine_grain_locking {
self.lock_manager.acquire_acquisition_lock(&self)?;
}

// We need to make sure that if there were any previous docs already compiled,
// they were compiled with the same Rustc version that we're currently using.
// See the function doc comment for more.
@@ -200,6 +209,11 @@ impl<'a, 'gctx> BuildRunner<'a, 'gctx> {
fingerprint.clear_memoized();
}

// Release the acquisition lock, allowing other Cargo processes to proceed
if self.bcx.gctx.cli_unstable().fine_grain_locking {
self.lock_manager.release_acquisition_lock()?;
}

// Now that we've figured out everything that we're going to do, do it!
queue.execute(&mut self)?;

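The scheme in this hunk (take the acquisition lock, acquire per-unit locks under it, then release it before jobs execute) can be sketched with in-process locks standing in for Cargo's file locks. The names below are illustrative stand-ins for this sketch, not Cargo's actual types:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-ins for the file locks: one global "acquisition" lock and a registry
// of per-unit locks.
struct LockManager {
    acquisition: Mutex<()>,
    units: Mutex<HashMap<String, Arc<Mutex<()>>>>,
}

impl LockManager {
    fn new() -> Self {
        Self {
            acquisition: Mutex::new(()),
            units: Mutex::new(HashMap::new()),
        }
    }

    // Phase 1: under the acquisition lock, obtain handles to every per-unit
    // lock this build needs. Serializing this phase means two builds can never
    // each hold part of an overlapping lock set while waiting on the other.
    fn acquire_unit_locks(&self, wanted: &[&str]) -> Vec<Arc<Mutex<()>>> {
        let _acq = self.acquisition.lock().unwrap();
        let mut units = self.units.lock().unwrap();
        wanted
            .iter()
            .map(|u| Arc::clone(units.entry((*u).to_string()).or_default()))
            .collect()
        // `_acq` drops here, before any compilation job runs, so other builds
        // with disjoint unit sets can proceed in parallel.
    }
}

fn main() {
    let mgr = LockManager::new();
    let first = mgr.acquire_unit_locks(&["foo", "bar"]);
    let second = mgr.acquire_unit_locks(&["bar", "baz"]);
    assert_eq!(first.len(), 2);
    assert_eq!(second.len(), 2);
    println!("ok");
}
```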
4 changes: 0 additions & 4 deletions src/cargo/core/compiler/job_queue/job_state.rs
Contributor:
I worry about the maintainability of having side-band communication going on.

Is it possible for us to manage this within the existing flow? For example, could we use Poll on Job::run to return early if we couldn't acquire the lock?

Member Author:

Hmmm, I think it might be tricky to use Poll with the generic structure of Job (e.g. how to resume the job, since internally it's a closure).

Though I am happy to look into that to see what is possible.

Contributor:

Wait, if we know a priori which jobs will actually build (see #16659 (comment)), then why can't we do all of the lock processing before the build, grabbing our exclusive locks and blocking up front, rather than waiting?

Member Author:

Oh, that's an interesting idea. So basically the DrainState could try to lock before calling DrainState::run (running the job).

This is single-threaded, so it will change the way we take the locks. I am imagining spawning a thread to take each lock, so as to avoid blocking the main job-queue loop. I think this could be encapsulated in the LockManager (and use try_lock() as an optimization to avoid spawning a new thread).

Contributor:

I'm saying that we acquire the locks in compile and downgrade after build.

Contributor:

That's super::compile, which currently coordinates the locks.

How much does grabbing as we go benefit in practice, when the build we block on would hold a shared lock until it's done, blocking our job and all dependents anyway? We can build some jobs that don't overlap, so a little?

Member Author:

> We can build some jobs that don't overlap, so a little?

The benefit we get is when there are build units that do not overlap. From my perspective, this is the main benefit of fine-grain locking over what we currently have. If we were to wait for everything, I feel like the benefits would be limited to a few niche scenarios?

Contributor:

The goal we are working towards is non-blocking: if there is overlap, then we are blocking.

So the question, then, is whether the benefit from some overlap justifies a more complex architecture (both for this and for blocking messages).

Member Author (@ranger-ross, Feb 23, 2026):

I did some thinking on this and I think we might be able to get away with simply locking before kicking off jobs.

The primary use case for fine-grain locking is to prevent rust-analyzer's cargo check from blocking the user's cargo build, which slows down iteration during development.

If the user is making small incremental changes to their project, full rebuilds should be uncommon. So we will likely take shared locks on all of the dependencies, meaning we don't need an exclusive lock for those. We just need to take an exclusive lock on the things that changed (dirty units). Build scripts are still a problem with this, but I think those will not be dirty unless the user is editing the build script itself.

I think the big question for whether this is worth the extra complexity of suspending jobs is whether we care about build-script units (or any other units that are shared between build and check) being blocking.
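The reasoning above (shared locks for fresh dependencies, exclusive locks only for dirty units) can be sketched as a small helper. The types and names here are simplified stand-ins for illustration, not Cargo's real ones:

```rust
#[derive(Debug, PartialEq)]
enum LockMode {
    Shared,    // unit is fresh: we only read its artifacts
    Exclusive, // unit is dirty: we will rebuild (write) its artifacts
}

// Hypothetical helper: pick a lock mode per unit based on whether it is dirty.
fn lock_mode(is_dirty: bool) -> LockMode {
    if is_dirty { LockMode::Exclusive } else { LockMode::Shared }
}

fn main() {
    // (name, is_dirty): in the common incremental case only the workspace
    // crate the user edited is dirty; all dependencies stay fresh.
    let units = [("serde", false), ("my-crate", true)];
    let modes: Vec<_> = units.iter().map(|(_, dirty)| lock_mode(*dirty)).collect();
    assert_eq!(modes, [LockMode::Shared, LockMode::Exclusive]);
    println!("ok");
}
```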

Member Author:

I made the changes I mentioned above and I believe they are working correctly. I was able to run a build and check in parallel, and couldn't make it deadlock (though much more testing is needed before stabilizing, of course).

Thanks for the reviews and for helping guide me in the right direction! I think this approach is much better in terms of simplicity while still accomplishing the goals of fine-grain locking.

I've updated the PR description and title to better reflect the changes (and minimized the original PR description).

@@ -147,10 +147,6 @@ impl<'a, 'gctx> JobState<'a, 'gctx> {
.push(Message::Finish(self.id, Artifact::Metadata, Ok(())));
}

pub fn lock_exclusive(&self, lock: &LockKey) -> CargoResult<()> {
self.lock_manager.lock(lock)
}

pub fn downgrade_to_shared(&self, lock: &LockKey) -> CargoResult<()> {
self.lock_manager.downgrade_to_shared(lock)
}
7 changes: 7 additions & 0 deletions src/cargo/core/compiler/layout.rs
@@ -286,6 +287,7 @@ impl Layout {
let build_dest = build_dest.as_path_unlocked();
let deps = build_dest.join("deps");
let artifact = deps.join("artifact");
let acquisition_lock = build_dest.join(".acquisition-lock");

let artifact_dir = if must_take_artifact_dir_lock {
// For now we don't do any more finer-grained locking on the artifact
@@ -323,6 +324,7 @@
fingerprint: build_dest.join(".fingerprint"),
examples: build_dest.join("examples"),
tmp: build_root.join("tmp"),
acquisition_lock,
_lock: build_dir_lock,
is_new_layout,
},
@@ -405,6 +407,7 @@ pub struct BuildDirLayout {
examples: PathBuf,
/// The directory for temporary data of integration tests and benches
tmp: PathBuf,
/// The path to the acquisition lock file (`.acquisition-lock`)
acquisition_lock: PathBuf,
/// The lockfile for a build (`.cargo-lock`). Will be unlocked when this
/// struct is `drop`ped.
///
@@ -505,4 +508,8 @@ impl BuildDirLayout {
paths::create_dir_all(&self.tmp)?;
Ok(&self.tmp)
}
/// Returns the path to the acquisition lock file.
pub fn acquisition_lock(&self) -> &Path {
&self.acquisition_lock
}
}
44 changes: 44 additions & 0 deletions src/cargo/core/compiler/locking.rs
@@ -16,16 +16,52 @@ use tracing::instrument;

/// A struct to store the lock handles for build units during compilation.
pub struct LockManager {
acquisition: RwLock<Option<FileLock>>,
locks: RwLock<HashMap<LockKey, FileLock>>,
}

impl LockManager {
pub fn new() -> Self {
Self {
acquisition: RwLock::new(None),
locks: RwLock::new(HashMap::new()),
}
}

/// Acquires the acquisition lock required to call [`LockManager::lock`] and [`LockManager::lock_shared`].
///
/// This should be called prior to attempting to lock build units, and released prior to
/// executing compilation jobs, to allow other Cargo processes to proceed if they do not
/// share any build units.
#[instrument(skip_all)]
pub fn acquire_acquisition_lock(&self, build_runner: &BuildRunner<'_, '_>) -> CargoResult<()> {
let path = build_runner.files().acquisition_lock();
let fs = Filesystem::new(path.to_path_buf());

let lock = fs.open_rw_exclusive_create(&path, build_runner.bcx.gctx, "acquisition lock")?;

let Ok(mut acquisition_lock) = self.acquisition.write() else {
bail!("failed to take acquisition write lock");
};
*acquisition_lock = Some(lock);

Ok(())
}

/// Releases the acquisition lock; see [`LockManager::acquire_acquisition_lock`].
#[instrument(skip_all)]
pub fn release_acquisition_lock(&self) -> CargoResult<()> {
let Ok(mut acquisition_lock) = self.acquisition.write() else {
bail!("failed to take acquisition write lock");
};
assert!(
acquisition_lock.is_some(),
"attempted to release acquisition while it was not taken"
);
*acquisition_lock = None;
Ok(())
}

/// Takes a shared lock on a given [`Unit`]
/// This prevents other Cargo instances from compiling (writing) to
/// this build unit.
@@ -38,6 +74,10 @@ impl LockManager {
build_runner: &BuildRunner<'_, '_>,
unit: &Unit,
) -> CargoResult<LockKey> {
assert!(
self.acquisition.read().unwrap().is_some(),
"attempted to take shared lock without acquisition lock"
);
let key = LockKey::from_unit(build_runner, unit);
tracing::Span::current().record("key", key.0.to_str());

@@ -60,6 +100,10 @@

#[instrument(skip(self))]
pub fn lock(&self, key: &LockKey) -> CargoResult<()> {
assert!(
self.acquisition.read().unwrap().is_some(),
"attempted to take exclusive lock without acquisition lock"
);
let locks = self.locks.read().unwrap();
if let Some(lock) = locks.get(key) {
lock.file().lock()?;
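The assertion guards added in this file can be modeled compactly. The sketch below replaces the file-lock plumbing with an Option flag, but keeps the same invariant: per-unit locks may only be taken while the acquisition lock is held. Names mirror the PR loosely; the bodies are illustrative only.

```rust
use std::sync::RwLock;

// Minimal model of the guard pattern in the diff.
struct LockManager {
    // Some(()) while the acquisition lock is held, None otherwise.
    acquisition: RwLock<Option<()>>,
}

impl LockManager {
    fn new() -> Self {
        Self { acquisition: RwLock::new(None) }
    }

    fn acquire(&self) {
        *self.acquisition.write().unwrap() = Some(());
    }

    fn release(&self) {
        let mut guard = self.acquisition.write().unwrap();
        assert!(guard.is_some(), "attempted to release acquisition while it was not taken");
        *guard = None;
    }

    // Taking a per-unit lock asserts the invariant before doing anything else.
    fn lock_unit(&self, _key: &str) {
        assert!(
            self.acquisition.read().unwrap().is_some(),
            "attempted to take unit lock without acquisition lock"
        );
        // ...the real per-unit file lock would be taken here...
    }
}

fn main() {
    let manager = LockManager::new();
    manager.acquire();
    manager.lock_unit("foo v0.1.0");
    manager.release();
    println!("ok");
}
```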
30 changes: 15 additions & 15 deletions src/cargo/core/compiler/mod.rs
@@ -237,19 +237,26 @@ fn compile<'gctx>(
work.then(link_targets(build_runner, unit, true)?)
});

// If -Zfine-grain-locking is enabled, we wrap the job with an upgrade to exclusive
// If -Zfine-grain-locking is enabled, we take an exclusive
// lock before starting, then downgrade to a shared lock after the job is finished.
if build_runner.bcx.gctx.cli_unstable().fine_grain_locking && job.freshness().is_dirty()
{
if let Some(lock) = lock {
// Here we unlock the current shared lock to avoid deadlocking with other cargo
// processes. Then we configure our compile job to take an exclusive lock
// before starting. Once we are done compiling (including both rmeta and rlib)
// we downgrade to a shared lock to allow other cargo's to read the build unit.
// We will hold this shared lock for the remainder of compilation to prevent
// other cargo from re-compiling while we are still using the unit.
// We take an exclusive lock before scheduling the job to keep our locking model
// simple. While this does mean we could potentially be waiting for another job
// when this could begin immediately, it reduces the risk of deadlock.
//
// We are also optimizing for avoiding rust-analyzer's `cargo check` preventing
// the user from running `cargo build` while developing. Generally, only a few
// workspace units will be changing in each build and the units will not be
// shared between `build` and `check` allowing them to run in parallel.
//
// Also note that we unlock before taking the exclusive lock, as not all
// platforms support lock upgrading. This is safe because we hold the
// acquisition lock, so we should be the only process operating on the locks
// and can assume the lock will not be stolen by another instance.
build_runner.lock_manager.unlock(&lock)?;
job.before(prebuild_lock_exclusive(lock.clone()));
build_runner.lock_manager.lock(&lock)?;
Comment on lines +254 to +259
Member Author:

Re: #16657 (comment)

I found the issue on Windows. There is different behavior between Linux and Windows when taking an exclusive lock on a file that is already locked with a shared lock: Linux will allow the lock upgrade, while Windows will block.

I re-added the unlock() before the lock() so that we always take a fresh lock. We should be safe from deadlocking since we also hold the acquisition lock.
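The upgrade problem described here is the same reason std's RwLock offers no in-place upgrade: while a shared guard is alive, an exclusive lock cannot be granted, so the shared lock must be released first. A minimal in-process illustration (the acquisition-lock guarantee mentioned above is what makes the release-then-relock window safe in the PR):

```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(0u32);

    // Take the shared lock first, as compile() does for up-to-date units.
    let shared = lock.read().unwrap();

    // Upgrading in place is not possible: while we still hold the shared
    // guard, an exclusive lock cannot be granted.
    assert!(lock.try_write().is_err());

    // So release first, then take a fresh exclusive lock. In the PR this is
    // safe only because the acquisition lock prevents another process from
    // stealing the lock in between.
    drop(shared);
    let mut exclusive = lock.write().unwrap();
    *exclusive += 1;
    assert_eq!(*exclusive, 1);
    println!("ok");
}
```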

job.after(downgrade_lock_to_shared(lock));
Comment on lines +245 to 260
Contributor:

> as well as couldn't make it deadlock. (though much more testing is needed before stabilizing of course)

The deadlock concern is if you have two cargo checks that acquire the shared lock before either acquires the exclusive lock. At that point, neither of them can. The window for this is short, which makes it tricky to observe through one-off testing but likely to be seen in the wild, especially with rust-analyzer running in the background.
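The race described here is the classic shared-to-exclusive upgrade deadlock: once two holders of the shared lock each wait to upgrade, neither can proceed. It can be illustrated in-process with try_write, which fails while any shared guard is alive:

```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(());

    // Two "cargo check" processes each take the shared lock...
    let reader1 = lock.read().unwrap();
    let reader2 = lock.read().unwrap();

    // ...and now neither can take the exclusive lock while the other's shared
    // guard is alive; a blocking write() here would deadlock both of them.
    assert!(lock.try_write().is_err());

    drop(reader1);
    assert!(lock.try_write().is_err()); // reader2 still holds the shared lock

    drop(reader2);
    assert!(lock.try_write().is_ok()); // only after both release
    println!("ok");
}
```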

Member Author (@ranger-ross, Feb 25, 2026):

What if we had another profile-level lock that we took (exclusively) during the fingerprint checking and while we took the unit-level locks, then dropped that lock before executing compiler jobs?

That would ensure that only a single Cargo instance is in the critical section where deadlocking is possible. And like you mention, this part of Cargo is generally quite fast, so I think this should be okay.

If we were to do this, I suppose we could probably reuse the existing .cargo-lock. I can't think of any obvious reasons why this wouldn't work.

What do you think?

Contributor:

I think that should be safe. The one downside I can think of is that if we block when getting a unit lock, we now block all other check builds, even where there isn't overlap. Most likely there would be, so maybe it's fine for the MVP?

Member Author:

I implemented this in 8d87975.

I initially tried to use .cargo-lock, but realized that we can't unlock it until the build is done, as cargo clean relies on it to avoid cleaning while a build is in progress. So instead, I created a new .acquisition-lock (not married to the name, we can change it if preferred) that we use to coordinate lock acquisition between processes.

}
}
@@ -615,13 +622,6 @@ fn verbose_if_simple_exit_code(err: Error) -> Error {
}
}

fn prebuild_lock_exclusive(lock: LockKey) -> Work {
Work::new(move |state| {
state.lock_exclusive(&lock)?;
Ok(())
})
}

fn downgrade_lock_to_shared(lock: LockKey) -> Work {
Work::new(move |state| {
state.downgrade_to_shared(&lock)?;
17 changes: 17 additions & 0 deletions tests/testsuite/build_dir.rs
@@ -40,6 +40,7 @@ fn binary_with_debug() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -95,6 +96,7 @@ fn binary_with_release() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/release/.cargo-lock
[ROOT]/foo/build-dir/release/.acquisition-lock
[ROOT]/foo/build-dir/release/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/build-dir/release/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/build-dir/release/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -205,6 +207,7 @@ fn should_default_to_target() {
[ROOT]/foo/target/.rustc_info.json
[ROOT]/foo/target/CACHEDIR.TAG
[ROOT]/foo/target/debug/.cargo-lock
[ROOT]/foo/target/debug/.acquisition-lock
[ROOT]/foo/target/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/target/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/target/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -234,6 +237,7 @@ fn should_respect_env_var() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -279,6 +283,7 @@ fn build_script_should_output_to_build_dir() {
p.root().join("build-dir").assert_build_dir_layout(str![[r#"
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo.txt
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/build_script_build[..].d
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/build_script_build[..][EXE]
@@ -342,6 +347,7 @@ fn cargo_tmpdir_should_output_to_build_dir() {
p.root().join("build-dir").assert_build_dir_layout(str![[r#"
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo-[HASH].d
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo.d
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo[..].d
@@ -402,6 +408,7 @@ fn examples_should_output_to_build_dir_and_uplift_to_target_dir() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/dep-example-foo
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/example-foo
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/example-foo.json
@@ -448,6 +455,7 @@ fn benches_should_output_to_build_dir() {
p.root().join("build-dir").assert_build_dir_layout(str![[r#"
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo-[HASH].d
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo[..].d
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo-[HASH][EXE]
@@ -527,6 +535,7 @@ fn cargo_package_should_build_in_build_dir_and_output_to_target_dir() {
p.root().join("build-dir").assert_build_dir_layout(str![[r#"
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -608,6 +617,7 @@ fn cargo_clean_should_clean_the_target_dir_and_build_dir() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -678,6 +688,7 @@ fn cargo_clean_should_remove_correct_files() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/bar/[HASH]/out/bar-[HASH].d
[ROOT]/foo/build-dir/debug/build/bar/[HASH]/out/libbar-[HASH].rlib
[ROOT]/foo/build-dir/debug/build/bar/[HASH]/out/libbar-[HASH].rmeta
@@ -705,6 +716,7 @@ fn cargo_clean_should_remove_correct_files() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo[..][EXE]
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/out/foo[..].d
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
@@ -841,6 +853,7 @@ fn template_workspace_root() {
[ROOT]/foo/build-dir/.rustc_info.json
[ROOT]/foo/build-dir/CACHEDIR.TAG
[ROOT]/foo/build-dir/debug/.cargo-lock
[ROOT]/foo/build-dir/debug/.acquisition-lock
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/build-dir/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -889,6 +902,7 @@ fn template_cargo_cache_home() {
[ROOT]/home/.cargo/build-dir/.rustc_info.json
[ROOT]/home/.cargo/build-dir/CACHEDIR.TAG
[ROOT]/home/.cargo/build-dir/debug/.cargo-lock
[ROOT]/home/.cargo/build-dir/debug/.acquisition-lock
[ROOT]/home/.cargo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/home/.cargo/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/home/.cargo/build-dir/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -951,6 +965,7 @@ fn template_workspace_path_hash() {
[ROOT]/foo/foo/[HASH]/build-dir/.rustc_info.json
[ROOT]/foo/foo/[HASH]/build-dir/CACHEDIR.TAG
[ROOT]/foo/foo/[HASH]/build-dir/debug/.cargo-lock
[ROOT]/foo/foo/[HASH]/build-dir/debug/.acquisition-lock
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/bin-foo.json
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/dep-bin-foo
@@ -1019,6 +1034,7 @@ fn template_workspace_path_hash_should_handle_symlink() {
[ROOT]/foo/foo/[HASH]/build-dir/.rustc_info.json
[ROOT]/foo/foo/[HASH]/build-dir/CACHEDIR.TAG
[ROOT]/foo/foo/[HASH]/build-dir/debug/.cargo-lock
[ROOT]/foo/foo/[HASH]/build-dir/debug/.acquisition-lock
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/dep-lib-foo
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/invoked.timestamp
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/lib-foo
@@ -1058,6 +1074,7 @@ fn template_workspace_path_hash_should_handle_symlink() {
[ROOT]/foo/foo/[HASH]/build-dir/.rustc_info.json
[ROOT]/foo/foo/[HASH]/build-dir/CACHEDIR.TAG
[ROOT]/foo/foo/[HASH]/build-dir/debug/.cargo-lock
[ROOT]/foo/foo/[HASH]/build-dir/debug/.acquisition-lock
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/dep-lib-foo
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/invoked.timestamp
[ROOT]/foo/foo/[HASH]/build-dir/debug/build/foo/[HASH]/fingerprint/lib-foo