
refactoring transport benchmarks for new UCX #10

Open
angainor wants to merge 8 commits into boeschf:ucx from angainor:m_boeschf_ucx

Conversation

@angainor

Most changes are cosmetic, but some commits are important. Will comment on those separately.

m_db.init(m_worker.address());
}

~transport_context()
@angainor (Author)

@boeschf We must destroy the ucp workers before the context, otherwise UCX segfaults, so the order of destruction is important. I added a simple loop here, but you might want to do it in some other way.
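For reference, a minimal sketch of the destruction order described above, assuming the context keeps its workers in a container (the member names m_workers and m_context are illustrative, not the actual fields):

    // Sketch only: every worker must be released with ucp_worker_destroy()
    // before ucp_cleanup() tears down the ucp context.
    ~transport_context()
    {
        for (auto& w : m_workers)   // hypothetical worker container
            ucp_worker_destroy(w);
        ucp_cleanup(m_context);     // the context goes last
    }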

@boeschf (Owner) Dec 10, 2019

this is solved now (I forgot to set the m_moved flag to false)


ucp_worker_handle& operator=(ucp_worker_handle&& other) noexcept
{
destroy();
@angainor (Author)

I did not really understand why this was here, or why you mark m_moved and then conditionally call ucp_worker_destroy(). The problem was that the workers were always marked as moved, so destroy() never actually destroyed anything at the end of the program.

@boeschf (Owner)

this is OK here (in case we're move-assigning a new worker onto an existing worker, the existing worker should be destroyed first)
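Putting both comments together, a hedged sketch of the ownership pattern under discussion (m_moved, destroy(), and the move assignment come from the snippets and comments above; everything else is an assumption, not the actual implementation):

    struct ucp_worker_handle
    {
        ucp_worker_h m_worker = nullptr;
        bool         m_moved  = true;      // true: nothing to destroy

        ucp_worker_handle(ucp_worker_handle&& other) noexcept
        : m_worker{other.m_worker}, m_moved{other.m_moved}
        {
            other.m_moved = true;          // source gives up ownership
        }

        ucp_worker_handle& operator=(ucp_worker_handle&& other) noexcept
        {
            destroy();                     // release the worker we currently own
            m_worker = other.m_worker;
            m_moved  = other.m_moved;      // must be false for a live worker
            other.m_moved = true;
            return *this;
        }

        void destroy() noexcept
        {
            if (!m_moved) ucp_worker_destroy(m_worker);
            m_moved = true;
        }

        ~ucp_worker_handle() { destroy(); }
    };

The moved-from flag is what makes the unconditional destroy() in the assignment operator safe: a moved-from handle owns nothing, so destroying it is a no-op. It also explains the bug above: if m_moved is never set to false for a live worker, destroy() skips the ucp_worker_destroy() call at program end.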

return m_req->m_worker->m_parallel_context->thread_primitives().critical(
[this]()
{
if (!(*m_req->m_completed))
@angainor (Author)

@boeschf we cannot reference m_req here, because it might have been freed already.

@boeschf (Owner)

yes we can: we check earlier and bail out
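As a hedged sketch of that control flow (only the critical-section call is taken from the snippet above; the surrounding function, the early completion check, and all other names are assumptions):

    bool test()
    {
        // bail out early: once the request has completed, m_req may have
        // been freed, so it must not be dereferenced again
        if (m_completed) return true;

        // m_req is still alive here, so the critical section can use it
        return m_req->m_worker->m_parallel_context->thread_primitives().critical(
            [this]()
            {
                if (!(*m_req->m_completed))
                {
                    // still pending: progress the transport under the lock
                    return false;
                }
                m_completed = true;
                return true;
            });
    }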
