
Commit 9473d7b

Refactor barrier interface (#650)
This PR refactors MMTk's write barrier interface. MMTk now contains three sets of write barrier APIs:

* **Subsuming write barrier**. The barrier replaces the store operation in the VM and performs the store in Rust. Useful as a quick (but slow at runtime) implementation for new bindings. The barrier implementation in Rust should contain both the fast-path and a call to the slow-path.
* **Full pre/post write barrier**. Used for VMs (like OpenJDK) that cannot easily support a subsuming barrier. The pre barrier is called before the store operation, and the post barrier after the store. The barrier implementation in Rust should contain both the fast-path and a call to the slow-path.
* **Barrier slow-path**. For performance, a VM may implement the store and the barrier fast-path in its IR, and only call into Rust for the slow-path.

A minimal binding-side usage sketch follows below.
1 parent 5c7445a commit 9473d7b
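
To make the first option concrete, here is a minimal sketch of a binding-side store helper, written against the signatures this PR adds to `memory_manager.rs`. The helper name and the generic-function framing are assumptions for illustration, not part of this commit; only the `memory_manager::object_reference_write` call is the new API.

```rust
// Sketch only: `store_with_subsuming_barrier` is a hypothetical binding
// helper. With the subsuming barrier, MMTk performs the fast-path, any
// slow-path work, and the store itself, so the VM does not write the slot.
use mmtk::memory_manager;
use mmtk::util::ObjectReference;
use mmtk::vm::VMBinding;
use mmtk::Mutator;

fn store_with_subsuming_barrier<VM: VMBinding>(
    mutator: &mut Mutator<VM>,
    src: ObjectReference,
    slot: VM::VMEdge,
    target: ObjectReference,
) {
    memory_manager::object_reference_write(mutator, src, slot, target);
}
```

The pre/post pair (option 2) and the region-copy barriers are sketched after the diff below.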

File tree

15 files changed: +779 -224 lines changed


src/memory_manager.rs

Lines changed: 146 additions & 5 deletions
@@ -14,7 +14,6 @@
 use crate::mmtk::MMTKBuilder;
 use crate::mmtk::MMTK;
 use crate::plan::AllocationSemantics;
-use crate::plan::BarrierWriteTarget;
 use crate::plan::{Mutator, MutatorContext};
 use crate::scheduler::WorkBucketStage;
 use crate::scheduler::{GCController, GCWork, GCWorker};
@@ -24,6 +23,7 @@ use crate::util::heap::layout::vm_layout_constants::HEAP_END;
 use crate::util::heap::layout::vm_layout_constants::HEAP_START;
 use crate::util::opaque_pointer::*;
 use crate::util::{Address, ObjectReference};
+use crate::vm::edge_shape::MemorySlice;
 use crate::vm::ReferenceGlue;
 use crate::vm::VMBinding;
 use std::sync::atomic::Ordering;
@@ -160,18 +160,159 @@ pub fn post_alloc<VM: VMBinding>(
     mutator.post_alloc(refer, bytes, semantics);
 }
 
+/// The *subsuming* write barrier by MMTk. For performance reasons, a VM should implement the write barrier
+/// fast-path on their side rather than just calling this function.
+///
+/// For a correct barrier implementation, a VM binding needs to choose one of the following options:
+/// * Use the subsuming barrier `object_reference_write`.
+/// * Use both `object_reference_write_pre` and `object_reference_write_post` if the binding has difficulty delegating the store to mmtk-core with the subsuming barrier.
+/// * Implement the fast-path on the VM side, and call the generic API `object_reference_slow` as the barrier slow-path call.
+/// * Implement the fast-path on the VM side, and do a specialized slow-path call.
+///
+/// Arguments:
+/// * `mutator`: The mutator for the current thread.
+/// * `src`: The modified source object.
+/// * `slot`: The location of the field to be modified.
+/// * `target`: The target for the write operation.
+#[inline(always)]
+pub fn object_reference_write<VM: VMBinding>(
+    mutator: &mut Mutator<VM>,
+    src: ObjectReference,
+    slot: VM::VMEdge,
+    target: ObjectReference,
+) {
+    mutator.barrier().object_reference_write(src, slot, target);
+}
+
+/// The write barrier by MMTk. This is a *pre* write barrier, which we expect a binding to call
+/// *before* it modifies an object. For performance reasons, a VM should implement the write barrier
+/// fast-path on their side rather than just calling this function.
+///
+/// For a correct barrier implementation, a VM binding needs to choose one of the following options:
+/// * Use the subsuming barrier `object_reference_write`.
+/// * Use both `object_reference_write_pre` and `object_reference_write_post` if the binding has difficulty delegating the store to mmtk-core with the subsuming barrier.
+/// * Implement the fast-path on the VM side, and call the generic API `object_reference_slow` as the barrier slow-path call.
+/// * Implement the fast-path on the VM side, and do a specialized slow-path call.
+///
+/// Arguments:
+/// * `mutator`: The mutator for the current thread.
+/// * `src`: The modified source object.
+/// * `slot`: The location of the field to be modified.
+/// * `target`: The target for the write operation.
+#[inline(always)]
+pub fn object_reference_write_pre<VM: VMBinding>(
+    mutator: &mut Mutator<VM>,
+    src: ObjectReference,
+    slot: VM::VMEdge,
+    target: ObjectReference,
+) {
+    mutator
+        .barrier()
+        .object_reference_write_pre(src, slot, target);
+}
+
 /// The write barrier by MMTk. This is a *post* write barrier, which we expect a binding to call
-/// *after* they modify an object. For performance reasons, a VM should implement the write barrier
+/// *after* it modifies an object. For performance reasons, a VM should implement the write barrier
 /// fast-path on their side rather than just calling this function.
 ///
-/// TODO: We plan to replace this API with a subsuming barrier API.
+/// For a correct barrier implementation, a VM binding needs to choose one of the following options:
+/// * Use the subsuming barrier `object_reference_write`.
+/// * Use both `object_reference_write_pre` and `object_reference_write_post` if the binding has difficulty delegating the store to mmtk-core with the subsuming barrier.
+/// * Implement the fast-path on the VM side, and call the generic API `object_reference_slow` as the barrier slow-path call.
+/// * Implement the fast-path on the VM side, and do a specialized slow-path call.
 ///
 /// Arguments:
 /// * `mutator`: The mutator for the current thread.
+/// * `src`: The modified source object.
+/// * `slot`: The location of the field to be modified.
 /// * `target`: The target for the write operation.
 #[inline(always)]
-pub fn post_write_barrier<VM: VMBinding>(mutator: &mut Mutator<VM>, target: BarrierWriteTarget) {
-    mutator.barrier().post_write_barrier(target)
+pub fn object_reference_write_post<VM: VMBinding>(
+    mutator: &mut Mutator<VM>,
+    src: ObjectReference,
+    slot: VM::VMEdge,
+    target: ObjectReference,
+) {
+    mutator
+        .barrier()
+        .object_reference_write_post(src, slot, target);
+}
+
+/// The *subsuming* memory region copy barrier by MMTk.
+/// This is called when the VM tries to copy a piece of heap memory to another location.
+/// The data within the slice does not need to be all valid pointers,
+/// but the VM binding will be able to filter out non-reference values during edge iteration.
+///
+/// For VMs that perform a heap memory copy operation, for example OpenJDK's array copy operation, the binding needs to
+/// call the `memory_region_copy*` APIs. As with `object_reference_write*`, the binding can choose either the subsuming barrier,
+/// or the pre/post barrier.
+///
+/// Arguments:
+/// * `mutator`: The mutator for the current thread.
+/// * `src`: Source memory slice to copy from.
+/// * `dst`: Destination memory slice to copy to.
+///
+/// The sizes of `src` and `dst` should be equal.
+#[inline(always)]
+pub fn memory_region_copy<VM: VMBinding>(
+    mutator: &'static mut Mutator<VM>,
+    src: VM::VMMemorySlice,
+    dst: VM::VMMemorySlice,
+) {
+    debug_assert_eq!(src.bytes(), dst.bytes());
+    mutator.barrier().memory_region_copy(src, dst);
+}
+
+/// The *generic* memory region copy *pre* barrier by MMTk, which we expect a binding to call
+/// *before* it performs the memory copy.
+/// This is called when the VM tries to copy a piece of heap memory to another location.
+/// The data within the slice does not need to be all valid pointers,
+/// but the VM binding will be able to filter out non-reference values during edge iteration.
+///
+/// For VMs that perform a heap memory copy operation, for example OpenJDK's array copy operation, the binding needs to
+/// call the `memory_region_copy*` APIs. As with `object_reference_write*`, the binding can choose either the subsuming barrier,
+/// or the pre/post barrier.
+///
+/// Arguments:
+/// * `mutator`: The mutator for the current thread.
+/// * `src`: Source memory slice to copy from.
+/// * `dst`: Destination memory slice to copy to.
+///
+/// The sizes of `src` and `dst` should be equal.
+#[inline(always)]
+pub fn memory_region_copy_pre<VM: VMBinding>(
+    mutator: &'static mut Mutator<VM>,
+    src: VM::VMMemorySlice,
+    dst: VM::VMMemorySlice,
+) {
+    debug_assert_eq!(src.bytes(), dst.bytes());
+    mutator.barrier().memory_region_copy_pre(src, dst);
+}
+
+/// The *generic* memory region copy *post* barrier by MMTk, which we expect a binding to call
+/// *after* it performs the memory copy.
+/// This is called when the VM tries to copy a piece of heap memory to another location.
+/// The data within the slice does not need to be all valid pointers,
+/// but the VM binding will be able to filter out non-reference values during edge iteration.
+///
+/// For VMs that perform a heap memory copy operation, for example OpenJDK's array copy operation, the binding needs to
+/// call the `memory_region_copy*` APIs. As with `object_reference_write*`, the binding can choose either the subsuming barrier,
+/// or the pre/post barrier.
+///
+/// Arguments:
+/// * `mutator`: The mutator for the current thread.
+/// * `src`: Source memory slice to copy from.
+/// * `dst`: Destination memory slice to copy to.
+///
+/// The sizes of `src` and `dst` should be equal.
+#[inline(always)]
+pub fn memory_region_copy_post<VM: VMBinding>(
+    mutator: &'static mut Mutator<VM>,
+    src: VM::VMMemorySlice,
+    dst: VM::VMMemorySlice,
+) {
+    debug_assert_eq!(src.bytes(), dst.bytes());
+    mutator.barrier().memory_region_copy_post(src, dst);
 }
 
 /// Return an AllocatorSelector for the given allocation semantic. This method is provided
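
For the pre/post option, a sketch of the two barrier halves bracketing the VM's own store. Again the helper name is hypothetical; `Edge::store` comes from mmtk-core's `vm::edge_shape` module, and this assumes edges are `Copy` (as the `Edge` trait requires), so the slot can be passed to both halves.

```rust
// Sketch only: the VM keeps ownership of the store and brackets it with
// the pre/post barriers added in this PR.
use mmtk::memory_manager;
use mmtk::util::ObjectReference;
use mmtk::vm::edge_shape::Edge;
use mmtk::vm::VMBinding;
use mmtk::Mutator;

fn store_with_pre_post_barrier<VM: VMBinding>(
    mutator: &mut Mutator<VM>,
    src: ObjectReference,
    slot: VM::VMEdge,
    target: ObjectReference,
) {
    memory_manager::object_reference_write_pre(mutator, src, slot, target);
    slot.store(target); // the VM's own store operation
    memory_manager::object_reference_write_post(mutator, src, slot, target);
}
```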

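The same choice exists for heap-memory copies such as array copy. A sketch under these assumptions: the function name and the raw-pointer mutator handle are hypothetical (bindings commonly hold mutators as raw pointers, and these entry points take `&'static mut`), and the slices are cloned for the two calls, relying on the `MemorySlice` trait's `Clone` bound.

```rust
// Sketch only: a hypothetical binding-side reference-array copy bracketed
// by the pre/post region-copy barriers. The subsuming alternative is a
// single call to `memory_manager::memory_region_copy(mutator, src, dst)`.
use mmtk::memory_manager;
use mmtk::vm::edge_shape::{Edge, MemorySlice};
use mmtk::vm::VMBinding;
use mmtk::Mutator;

fn copy_reference_array<VM: VMBinding>(
    mutator: *mut Mutator<VM>, // raw pointer, so we can reborrow as &'static mut twice
    src: VM::VMMemorySlice,
    dst: VM::VMMemorySlice,
) {
    debug_assert_eq!(src.bytes(), dst.bytes()); // sizes must match
    memory_manager::memory_region_copy_pre(unsafe { &mut *mutator }, src.clone(), dst.clone());
    // The VM's own copy, element by element over the slices' edges.
    for (s, d) in src.iter_edges().zip(dst.iter_edges()) {
        d.store(s.load());
    }
    memory_manager::memory_region_copy_post(unsafe { &mut *mutator }, src, dst);
}
```
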
0 commit comments
