Solution Freezing On 3rd Iteration Onwards When Running Unsteady Dual-Timestepping in Parallel #1915
-
Running unsteady simulations with dual-time stepping on many cores causes the solution (and residuals) to freeze at the 3rd outer iteration.

After running unsteady simulations with dual-time stepping both in serial and in parallel, I discovered that using a relatively large number of processes leads to "frozen" residuals while running. An image of this is shown below:

[screenshot: residual history frozen from the 3rd outer iteration onward]

These residual values persist for the remaining duration of the run, and, as expected, the solution does not change after this 3rd iteration. None of this occurs when running in serial: the serial results continue to evolve, and they match the data I was comparing against. I use Intel MPI, where I set
I have noticed on a variety of other simulations I am running that this error trends with the number of cores I am using.
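For reference, a parallel launch of the sort described above looks something like the following (SU2_CFD is the standard solver binary; the core count and config file name are illustrative, not the ones from the original post):

```
# Launch SU2 on 30 cores with Intel MPI (hypothetical core count and config name)
mpirun -np 30 SU2_CFD unsteady.cfg
```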
-
Attached below is my configuration file as well, in case it is helpful.
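The original attachment is not reproduced here, but a minimal sketch of the dual-time options such a configuration would typically contain (assuming SU2 v7 option names; the numeric values are illustrative, not the poster's):

```
% Enable the time-dependent solver
TIME_DOMAIN= YES
% Second-order dual-time stepping
TIME_MARCHING= DUAL_TIME_STEPPING-2ND_ORDER
% Physical time step and total simulated time (illustrative values)
TIME_STEP= 1.0E-4
MAX_TIME= 1.0
% Maximum inner (pseudo-time) iterations per physical time step
INNER_ITER= 200
```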
-
Can you add LINSOL to the screen output and post both the run that works and the one that freezes?
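For anyone reading later: this is a one-line change to the screen-output list (assuming SU2 v7 output field names; the residual fields shown are illustrative and depend on the solver):

```
% Add the linear-solver iteration count and residual to the screen output
SCREEN_OUTPUT= (TIME_ITER, INNER_ITER, RMS_DENSITY, RMS_ENERGY, LINSOL_ITER, LINSOL_RESIDUAL)
```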
-
Hi, thanks for the quick response. Attached below is the serial output, which works:

[screenshot: serial screen output with LINSOL fields]

And the parallel output (run on 15 cores this time, fewer than the number I used in the screenshot I first posted; this time the freeze appears to occur on the 4th time step):

[screenshot: parallel screen output freezing at the 4th time step]

I stopped the run manually at ~0.10 seconds.
-
OK, the last number is the residual of the linear solver; 0 means that FGMRES is stagnating.
> OK, the last number is the residual of the linear solver; 0 means that FGMRES is stagnating.
You can try lowering the CFL (or using CFL_ADAPT), or you can try running the case steady first and then initializing the unsteady run from that solution, which should be better for the linear solver.
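As a concrete sketch of both suggestions (assuming SU2 v7 option names; the values are illustrative):

```
% Option 1: lower the CFL, or let SU2 adapt it
CFL_NUMBER= 5.0
CFL_ADAPT= YES
% ( factor-down, factor-up, CFL min, CFL max )
CFL_ADAPT_PARAM= ( 0.1, 2.0, 1.0, 100.0 )

% Option 2: converge the case steady first, then restart the
% unsteady run from that solution file
RESTART_SOL= YES
SOLUTION_FILENAME= solution_flow.dat
```

The idea behind the second option is that a converged steady field gives the implicit system a much better initial guess than uniform freestream, so FGMRES is less likely to stagnate during the first few time steps.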