BEAMS3D and MPI_CHECK behavior #199

Open
lazersos opened this issue Sep 17, 2023 · 0 comments
Labels: bug (Something isn't working)

@lazersos (Collaborator) commented:
The code in question is in beams3d_runtime.f90, specifically:

#if defined(MPI_OPT)
        CALL MPI_BARRIER(MPI_COMM_BEAMS,ierr_mpi)
        ALLOCATE(error_array(1:nprocs_beams))
        ierr_mpi = 0
        CALL MPI_ALLGATHER(error_num,1,MPI_INTEGER,error_array,1,MPI_INTEGER,MPI_COMM_BEAMS,ierr_mpi)
        ierr_mpi = 0
        CALL MPI_BARRIER(MPI_COMM_BEAMS,ierr_mpi)
        IF (ANY(error_array .ne. 0)) CALL MPI_FINALIZE(ierr_mpi)
        DEALLOCATE(error_array)
        RETURN
#else
        IF (error_num .eq. MPI_CHECK) RETURN
#endif

So this makes no sense to me. I believe what I intended was to gather an array of error codes, then, if any one process had an error, perform an MPI_FINALIZE, and otherwise return control to the code. There are two issues with this.

  1. This will not do that: after the call to MPI_FINALIZE the code still deallocates and returns, so execution continues even though MPI has been shut down.
  2. Unless error_array is being defaulted to 0, there is a good chance this will cause the code to crash.

I suspect some of the crash behavior we've seen on Raven/Cobra is due to this. The fix is rather straightforward, and I will work on testing it.
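Something along the following lines would match that intent (untested, just a sketch of what I have in mind, not the final fix): zero error_array explicitly after allocation, and if any rank reports an error, finalize MPI and STOP on all ranks instead of returning.

#if defined(MPI_OPT)
        CALL MPI_BARRIER(MPI_COMM_BEAMS,ierr_mpi)
        ALLOCATE(error_array(1:nprocs_beams))
        error_array = 0   ! assumed: explicit initialization so stale memory cannot look like an error
        ierr_mpi = 0
        CALL MPI_ALLGATHER(error_num,1,MPI_INTEGER,error_array,1,MPI_INTEGER,MPI_COMM_BEAMS,ierr_mpi)
        ierr_mpi = 0
        CALL MPI_BARRIER(MPI_COMM_BEAMS,ierr_mpi)
        IF (ANY(error_array .ne. 0)) THEN
           DEALLOCATE(error_array)
           CALL MPI_FINALIZE(ierr_mpi)  ! all ranks shut MPI down together
           STOP                         ! assumed: stop here instead of returning into code that expects MPI to still be alive
        END IF
        DEALLOCATE(error_array)
        RETURN
#else
        IF (error_num .eq. MPI_CHECK) RETURN
#endif

Because MPI_ALLGATHER gives every rank the same error_array, all ranks take the same branch: either everyone finalizes and stops, or everyone returns cleanly.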

lazersos added the bug (Something isn't working) label on Sep 17, 2023