With #423, the `Pipeline<Communicator>` let us remove the synchronization point at the end of each algorithm, but in exchange we had to introduce a `comm.clone()` for each `Pipeline` at the beginning of each algorithm in order to keep the `Communicator`s independent.
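For reference, a minimal sketch of the current pattern (the `Communicator` and `Pipeline` types below are simplified stand-ins, not the actual library API):

```cpp
#include <utility>

// Hypothetical stand-in: a real clone() would duplicate the underlying
// MPI communicator (e.g. via MPI_Comm_dup).
struct Communicator {
  Communicator clone() const { return Communicator{}; }
};

// Simplified Pipeline: serializes access to the wrapped object. The real
// implementation hands out futures granting exclusive, ordered access.
template <class T>
class Pipeline {
public:
  explicit Pipeline(T obj) : obj_(std::move(obj)) {}
  T& operator()() { return obj_; }

private:
  T obj_;
};

// Current pattern: each algorithm clones the communicator into its own
// Pipeline so that its communications stay independent of other algorithms.
void some_algorithm(Communicator& comm) {
  Pipeline<Communicator> pipeline(comm.clone());
  // ... enqueue this algorithm's communication tasks through `pipeline` ...
}

int main() {
  Communicator world;
  some_algorithm(world);  // every call pays for a fresh clone
}
```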
One of the team's first ideas is to allocate a pool of `Communicator`s associated 1:1 with `Pipeline`s, providing both parallelism and safety. This way we get a restricted set of `Communicator`s, which keeps their number limited and avoids "overloading" MPI while still allowing independent communications. At the same time, the associated `Pipeline`s ensure that communications inside each `Communicator` are ordered even among different algorithms, because the `Pipeline` "keeps the history" of the operations and acts as a "global" manager for all the algorithms sharing the same `Pipeline`/`Communicator` pair.
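A minimal sketch of what such a pool could look like, reusing the stand-in types from the sketch above (`CommunicatorPool` and its round-robin `next()` are hypothetical names, not an existing API):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical fixed-size pool of Pipeline/Communicator pairs.
class CommunicatorPool {
public:
  CommunicatorPool(const Communicator& base, std::size_t size) {
    pipelines_.reserve(size);
    for (std::size_t i = 0; i < size; ++i)
      pipelines_.emplace_back(base.clone());  // bounded number of clones
  }

  // Algorithms sharing the returned pair are ordered by its Pipeline;
  // algorithms on different pairs can communicate in parallel.
  Pipeline<Communicator>& next() {
    return pipelines_[next_++ % pipelines_.size()];
  }

private:
  std::vector<Pipeline<Communicator>> pipelines_;
  std::size_t next_ = 0;
};
```

Round-robin assignment is just one possible policy; the key property is that the number of MPI communicators is fixed up front instead of growing with the number of algorithm invocations.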
The starting point seems to be embedding the `Pipeline` in the `Communicator` or in the `CommunicatorGrid`, so that each algorithm can request access to the communicator and get its associated `Pipeline`.
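A hypothetical sketch of that embedding, building on the pool above (`row_communicator()`/`col_communicator()` are illustrative names; the real grid interface may differ):

```cpp
// A CommunicatorGrid-like class owning the pools: algorithms just request
// access and receive the Pipeline tied to one of the pooled Communicators.
class CommunicatorGrid {
public:
  CommunicatorGrid(const Communicator& row, const Communicator& col,
                   std::size_t pool_size)
      : row_pool_(row, pool_size), col_pool_(col, pool_size) {}

  // No clone() at the algorithm's call site anymore: the grid decides
  // which pooled Pipeline/Communicator pair the caller gets.
  Pipeline<Communicator>& row_communicator() { return row_pool_.next(); }
  Pipeline<Communicator>& col_communicator() { return col_pool_.next(); }

private:
  CommunicatorPool row_pool_;
  CommunicatorPool col_pool_;
};
```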