Allocating multiple blocks to one mpi rank in LB #5026
base: python
Changes from 12 commits
Here and elsewhere, do we really need a default value, considering we already provide a default value in the Python class?
I added a default value for the Python class `LatticeWalberla`. And `blocks_per_mpi_rank` in `LBFluid` is set by `get_value<Utils::Vector3i>(m_lattice->get_parameter("blocks_per_mpi_rank"))`.
"Using more than one block per MPI rank is not supported for GPU LB" (but why, actually?)
How about "GPU LB only uses 1 block per MPI rank"?
`Utils::hadamard_product()`?
I have rewritten it as you suggested.
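For reference, a minimal sketch of the component-wise (Hadamard) product being suggested here; `Vector3i` below is a stand-in for `Utils::Vector3i`, not the actual espresso type:

```cpp
#include <array>

// Stand-in for Utils::Vector3i (assumption: the real type behaves similarly).
using Vector3i = std::array<int, 3>;

// Component-wise (Hadamard) product, the operation Utils::hadamard_product()
// provides, replacing a hand-rolled per-component multiplication.
Vector3i hadamard_product(Vector3i const &a, Vector3i const &b) {
  return {a[0] * b[0], a[1] * b[1], a[2] * b[2]};
}
```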
Wouldn't it be better to have functions for this in the Lattice class, so they can be used by EK as well?
After all, LB and EK probably need to agree about both the MPI and the block decomposition.
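To illustrate the point with a hypothetical helper (not the actual Lattice API): the global block grid that LB and EK would both have to agree on is the MPI node grid scaled component-wise by the per-rank block count, so a single shared function would keep the two in sync:

```cpp
#include <array>

using Vector3i = std::array<int, 3>;

// Hypothetical Lattice-level helper: the total block grid is the MPI node
// grid multiplied component-wise by the number of blocks per MPI rank.
Vector3i block_grid(Vector3i const &node_grid,
                    Vector3i const &blocks_per_mpi_rank) {
  return {node_grid[0] * blocks_per_mpi_rank[0],
          node_grid[1] * blocks_per_mpi_rank[1],
          node_grid[2] * blocks_per_mpi_rank[2]};
}
```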
`cells_per_block`?
I changed `cells_block` to `cells_per_block`.
What does this do/mean?
If "keep global block information" is true, each process keeps information about remote blocks that reside on other processes.
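As an illustration of the idea (a simplified sketch, not walberla code): with global block information, every rank can answer "which rank owns block X?" locally, at the cost of replicating a block-to-rank map on all processes:

```cpp
#include <map>

// Sketch of "keep global block information": a map from block ID to owner
// rank, replicated on every process, so ownership queries about remote
// blocks need no communication.
struct GlobalBlockInfo {
  std::map<int, int> owner_rank_of_block; // block ID -> MPI rank

  int owner_of(int block_id) const {
    return owner_rank_of_block.at(block_id);
  }
};
```

Without it, each process only knows its own blocks and must communicate to learn about the rest.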
Are we sure that there is no function in the block forest or uniform block forest which supplies the answer to this?
If not, and we really have to calculate this ourselves, it could probably be a bit more compact. ChatGPT came up with:
In my understanding, a BlockForest knows the domain size of the whole space, and each block knows its own domain range within that space. Hence, there is no class that knows the domain range covered by a single MPI rank. So I rewrote the code to make it more compact.
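The calculation being discussed can be sketched as follows (with a minimal stand-in for walberla's AABB type, and assuming the rank's blocks tile a rectangular region): the rank-local domain is the merged bounding box of the blocks assigned to that rank:

```cpp
#include <algorithm>
#include <vector>

// Minimal axis-aligned bounding box (stand-in for walberla's AABB).
struct AABB {
  double min[3];
  double max[3];
};

// The domain range of one MPI rank is the merged bounding box of the AABBs
// of all blocks local to that rank (valid when those blocks tile a box).
AABB local_domain(std::vector<AABB> const &local_blocks) {
  AABB result = local_blocks.front();
  for (auto const &block : local_blocks) {
    for (int i = 0; i < 3; ++i) {
      result.min[i] = std::min(result.min[i], block.min[i]);
      result.max[i] = std::max(result.max[i], block.max[i]);
    }
  }
  return result;
}
```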