I've encountered this issue before, but recently ran into it again after installing newer versions of Caliper (2.5 and the GitHub head). Running an MPI-only application with Caliper instrumentation and PAPI, I get the following error once for each MPI rank:

`CALIPER: (43): default: mpireport: MPI is already finalized. Cannot aggregate output.`

We previously resolved this in Fortran with `cali_flush()` calls and `CALI_MPIREPORT_WRITE_ON_FINALIZE=false`, but those don't help this time, at least not in my C test case. Have the recommended MPI settings changed? I've included all my settings below in case that helps. Thanks,

`export CALI_MPIREPORT_CONFIG="SELECT *,sum(papi.${metric}) GROUP BY prop:nested,mpi.rank FORMAT json"`
`export CALI_MPIREPORT_FILENAME=test.json`
Hi @mastino ,

By default, Caliper always tries to flush on program exit (after MPI_Finalize), which produces this error message when using the mpireport service. You can turn that off with `CALI_CHANNEL_FLUSH_ON_EXIT=false`. I'd recommend always setting that when using mpireport.

The mpireport service should still produce the output if you let it flush at MPI_Finalize (which your setting disables) or if you call `cali_flush()` in the program at some point before MPI_Finalize. Did you get the output (i.e., test.json)? If not, there's something else going on.

In a C program, the write-on-finalize option should work. That's only been an issue in Fortran, where we can't directly intercept MPI calls. You can still explicitly call `cali_flush()`.