
What is the best way to transfer data (real and integer arrays) between two running Fortran programs on the same machine?


We are currently using file I/O but need a better/faster way. Sample code would be appreciated.


By using files for transfer, you're already implementing a form of message passing, so I think that would be the most natural fit for this sort of program. You could write something yourself that uses shared memory when available and something like TCP/IP when not, or you could just use a library that already does that, like MPI. MPI is widely available, works, will take advantage of shared memory if the two programs are running on the same machine, and would also let you run them on different machines entirely without changing your code.

So as a simple example of one program sending data to a second and then waiting for data back, we'd have the following two programs. First, first.f90:

program first

    use protocol
    use mpi
    implicit none
    real, dimension(n,m) :: inputdata
    real, dimension(n,m) :: processeddata
    integer :: rank, comsize, ierr, otherrank
    integer :: rstatus(MPI_STATUS_SIZE)


    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, comsize, ierr)

    if (comsize /= 2) then
        print *,'Error: this example requires exactly 2 processes.'
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    endif

    !! 2 PEs; the other is 1 if we're 0, or 0 if we're 1.
    otherrank = comsize - (rank+1)

    inputdata = 1.
    inputdata = exp(sin(inputdata))

    print *, rank, ': first: finished computing; now sending to second.'
    call MPI_SEND(inputdata, n*m, MPI_REAL, otherrank, firsttag, &
                  MPI_COMM_WORLD, ierr)
    print *, rank, ': first: Now waiting for return data...'
    call MPI_RECV(processeddata, n*m, MPI_REAL, otherrank, backtag, &
                  MPI_COMM_WORLD, rstatus, ierr)
    print *, rank, ': first: received data from partner.'

    call MPI_FINALIZE(ierr)

end program first

and second.f90:

program second

    use protocol
    use mpi
    implicit none
    real, dimension(n,m) :: inputdata
    real, dimension(n,m) :: processeddata
    integer :: rank, comsize, ierr, otherrank
    integer :: rstatus(MPI_STATUS_SIZE)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, comsize, ierr)

    if (comsize /= 2) then
        print *,'Error: this example requires exactly 2 processes.'
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    endif

    !! 2 PEs; the other is 1 if we're 0, or 0 if we're 1.
    otherrank = comsize - (rank+1)

    print *, rank, ': second: Waiting for initial data...'
    call MPI_RECV(inputdata, n*m, MPI_REAL, otherrank, firsttag, &
                  MPI_COMM_WORLD, rstatus, ierr)

    print *, rank, ': second: adding 1 and sending back.'
    processeddata = inputdata + 1 
    call MPI_SEND(processeddata, n*m, MPI_REAL, otherrank, backtag, &
                  MPI_COMM_WORLD, ierr)

    print *, rank, ': second: completed'

    call MPI_FINALIZE(ierr)

end program second

For clarity, stuff that the two programs must agree on could go in a module they both use, here protocol.f90:

module protocol
    !! shared information like tag ids, etc goes here

    integer, parameter :: firsttag = 1
    integer, parameter :: backtag  = 2

    !! size of problem
    integer, parameter :: n = 10, m = 20 
end module protocol 

A makefile for building the two executables:

all: first second

FFLAGS=-g -Wall
F90=mpif90

%.mod: %.f90
        $(F90) -c $(FFLAGS) $^    

%.o: %.f90
        $(F90) -c $(FFLAGS) $^    

first: protocol.mod first.o
        $(F90) -o $@ first.o protocol.o

second: protocol.mod second.o
        $(F90) -o $@ second.o protocol.o

clean:
        rm -rf *.o *.mod

and then you run the two programs as follows:

$ mpiexec -n 1 ./first : -n 1 ./second
           1 : second: Waiting for initial data...
           0 : first: finished computing; now sending to second.
           0 : first: Now waiting for return data...
           1 : second: adding 1 and sending back.
           1 : second: completed
           0 : first: received data from partner.
$

We could certainly give you a more relevant example if you give us more information about the workflow between the two programs.


Are you using binary (unformatted) file I/O? Unless the data quantity is huge, that should be fast.
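For example, here is a minimal sketch of writing arrays with unformatted stream I/O (access='stream' is Fortran 2003, newunit= is Fortran 2008; the file name exchange.dat and the array shapes are placeholders that both programs would have to agree on):

program writer
    implicit none
    real    :: a(10,20)
    integer :: k(5)
    integer :: u

    a = 1.0
    k = 42

    ! form='unformatted' writes raw binary with no text conversion;
    ! access='stream' skips record markers, so the file is just the bytes.
    open(newunit=u, file='exchange.dat', form='unformatted', &
         access='stream', status='replace')
    write(u) a, k
    close(u)
end program writer

The reading program opens the same file with status='old' and does the mirror-image read(u) a, k into arrays of the same shape and kind.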

Otherwise you could use interprocess communication, but it would be more complicated. Example code for that is more likely to exist in C, which you could call from Fortran using the ISO C binding.
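As one illustration of that route, here is a hedged sketch of calling POSIX mkfifo(3) through the ISO C binding to create a named pipe, which both programs could then open like an ordinary unformatted file. This is POSIX-only, and declaring the mode_t argument as c_int is an assumption that holds on common Linux targets but is not guaranteed everywhere:

program make_fifo
    use, intrinsic :: iso_c_binding, only: c_int, c_char, c_null_char
    implicit none

    interface
        ! int mkfifo(const char *pathname, mode_t mode);
        function mkfifo(pathname, mode) bind(c, name='mkfifo')
            import :: c_int, c_char
            character(kind=c_char), dimension(*), intent(in) :: pathname
            integer(c_int), value :: mode
            integer(c_int) :: mkfifo
        end function mkfifo
    end interface

    integer(c_int) :: stat

    ! C strings are null-terminated; o'600' is octal 600,
    ! i.e. read/write permission for the owner only.
    stat = mkfifo('exchange.fifo'//c_null_char, int(o'600', c_int))
    if (stat /= 0) print *, 'mkfifo failed (perhaps the FIFO already exists)'
end program make_fifo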
