Yes, this behavior is expected. It is likely related to the
implementation of the progress engine in your MPI. Could you try
changing your code to perform a regular send, but from a separate
thread? Take a look at the code in demo/threads
(http://code.google.com/p/mpi4py/source/browse/trunk/demo/threads/sendrecv.py)...
Note that you will need an MPI implementation that REALLY supports
threads (e.g. MPICH2 should do the job).
Below is my version; feel free to improve it as convenient.
# File mpitest.py begin
from mpi4py import MPI
import threading
import time

comm = MPI.COMM_WORLD
# Large message, to force the rendezvous (not eager) protocol.
data = ['myid_particles:'+str(comm.rank)]*10000000
otherrank = 1 if comm.rank == 0 else 0

def _send():
    # Blocking send, run in a background thread so the main
    # thread remains free to post the matching receive.
    global time1
    comm.send(data, dest=otherrank, tag=1)
    time1 = time.time() - time0

send_thread = threading.Thread(target=_send)
time0 = time1 = time2 = time3 = 0
time0 = time.time()
send_thread.start()
if comm.rank == 1:
    time.sleep(10)  # delay rank 1's receive to expose progress behavior
time2 = time.time() - time0
a = comm.recv(source=otherrank, tag=1)
time3 = time.time() - time0
send_thread.join()
print(str(comm.rank)+': send at t = '+str(time1))
print(str(comm.rank)+': recv at t = ('+str(time2)+','+str(time3)+')')
# END
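As an aside, the threading pattern above can be sketched without MPI at
all; in this sketch `slow_send` is just a hypothetical stand-in for the
blocking comm.send(), so you can see that the main thread stays free
while the send completes in the background:

```python
import threading
import time

result = {}
t0 = time.time()

def slow_send():
    # Stand-in for a blocking comm.send() that takes a while to complete.
    time.sleep(0.2)
    result['sent_at'] = time.time() - t0

sender = threading.Thread(target=slow_send)
sender.start()
# The main thread continues immediately (e.g. it could post the recv here).
busy_until = time.time() - t0
sender.join()
print('send completed at t = %.2f' % result['sent_at'])
```

Running the sketch shows `busy_until` is essentially zero while
`result['sent_at']` is about 0.2, i.e. the main thread was not blocked
by the "send".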
--
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
3000 Santa Fe, Argentina
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169