mpi4py.MPI.Exception: MPI_ERR_TRUNCATE: message truncated


Jayant Sahewal

Oct 15, 2016, 2:25:28 PM10/15/16
to mpi4py
Hi All,

I converted the ring_c.c code from the Open MPI examples to Python to experiment with mpi4py. Here is my code.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

next_proc = (rank + 1) % size
prev_proc = (rank + size - 1) % size

tag = 2
message = 10

if 0 == rank:
    comm.send(message, dest=next_proc, tag=tag)

while(1):
    message = comm.recv(message, source=prev_proc, tag=tag)

    if 0 == rank:
        message = message - 1
        print "Process %d decremented value: %d\n" % (rank, message)

    comm.send(message, dest=next_proc, tag=tag)

    if 0 == message:
        print "Process %d exiting\n" % (rank)
        break

if 0 == rank:
    message = comm.recv(message, source=prev_proc, tag=tag)


When I run it via mpiexec for any number of processes, for example
mpiexec -n 10 python ring_py.py

It gives the following output and error:

Process 0 decremented value: 9
Process 0 decremented value: 8
Process 0 decremented value: 7
Process 0 decremented value: 6
Process 0 decremented value: 5
Process 0 decremented value: 4
Traceback (most recent call last):
  File "ring_py.py", line 20, in <module>
    message = comm.recv(message, source=prev_proc, tag=tag)
  File "MPI/Comm.pyx", line 1192, in mpi4py.MPI.Comm.recv (src/mpi4py.MPI.c:106889)
  File "MPI/msgpickle.pxi", line 287, in mpi4py.MPI.PyMPI_recv (src/mpi4py.MPI.c:42965)
mpi4py.MPI.Exception: MPI_ERR_TRUNCATE: message truncated

A couple of observations:
1) If I change the initial message to, say, 6 or 50, it always errors out at "Process 0 decremented value: 4".
2) The number of processes has no effect on the output of the code, other than the execution time.

A few details about my system:
1) mpi4py version is 2.0.0, which I installed via pip.
2) I am using a 2012 MacBook Air with macOS Sierra and an Intel Core i7 processor.

mpirun --version
mpirun (Open MPI) 2.0.1

Can someone please help me understand what is going on with my code?

Thank you,
Jayant

Aron Ahmadia

Oct 15, 2016, 2:43:07 PM10/15/16
to mpi4py
I'm having a hard time understanding what's going on in that example code. MPI_Recv is blocking and it looks like all the processes are launching a blocking receive before a corresponding send has been posted. 

This seems to be a problem with the original example as well: https://github.com/open-mpi/ompi/blob/master/examples/ring_c.c

What am I missing?

A


Lisandro Dalcin

Oct 15, 2016, 2:47:16 PM10/15/16
to mpi4py
You have to use ``comm.recv(None, source, tag)``.

If you use ``comm.recv(N, ...)``, where N is an integer, it means something totally different: an upper bound on the number of bytes for the incoming pickled message.
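For example, the receive line in your ring loop should just be (a minimal sketch, reusing your variable names):

    # Let mpi4py work out the size of the incoming pickled message itself:
    message = comm.recv(None, source=prev_proc, tag=tag)
    # or, equivalently, omit the buffer argument altogether:
    message = comm.recv(source=prev_proc, tag=tag)

As written, ``comm.recv(message, ...)`` reuses the last received integer as the byte limit for the next incoming message, so my guess is that once the value drops low enough the pickled payload no longer fits, the implementation reports MPI_ERR_TRUNCATE, and that is why it always fails at the same point no matter what starting value you pick.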

This "feature" is really error prone, it is there as a workaround for old times where MPI implementations do not have matched probes (introduced in MPI-3), and receiving pickled messages was not thread safe (something I fixed recently if you run with pre-MPI-3 implementations).

I'm considering removing this feature in the next release, but for backward compatibility reasons I think we need to keep the "buf" argument of recv(); however, we could emit a warning if the user passes anything other than "None" for "buf".

What do you think, guys? 






--
Lisandro Dalcin
============
Research Scientist
Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
Extreme Computing Research Center (ECRC)
King Abdullah University of Science and Technology (KAUST)
http://ecrc.kaust.edu.sa/

4700 King Abdullah University of Science and Technology
al-Khawarizmi Bldg (Bldg 1), Office # 0109
Thuwal 23955-6900, Kingdom of Saudi Arabia
http://www.kaust.edu.sa

Office Phone: +966 12 808-0459

Aron Ahmadia

Oct 15, 2016, 2:48:42 PM10/15/16
to mpi4py
Okay, I understand. The message will eventually "hit" each process as they wait in the ring.

(Also reading Lisandro's post, let me chew on that for a bit).

Aron Ahmadia

Oct 15, 2016, 2:50:55 PM10/15/16
to mpi4py
What about issuing the warning if the message is truncated by the buf argument? I'm not sure there's a good argument for honoring the buf kwarg in the lowercase receive method. 

Lisandro Dalcin

Oct 15, 2016, 2:51:24 PM10/15/16
to mpi4py
On 15 October 2016 at 21:48, Aron Ahmadia <ar...@ahmadia.net> wrote:
Okay, I understand. The message will eventually "hit" each process as they wait in the ring.

(Also reading Lisandro's post, let me chew on that for a bit).



Lisandro Dalcin

Oct 15, 2016, 2:53:04 PM10/15/16
to mpi4py

On 15 October 2016 at 21:50, Aron Ahmadia <ar...@ahmadia.net> wrote:
What about issuing the warning if the message is truncated by the buf argument? I'm not sure there's a good argument for honoring the buf kwarg in the lowercase receive method. 

There is no way to know in advance whether the message will be truncated. If it is, a quality MPI implementation raises an error, which is exactly what the OP saw.

Again, with an MPI-3 implementation I use, by default, a totally different approach based on MPI_Mprobe().
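If you're curious, here is a rough sketch of what the matched-probe path does, written out by hand (API names from memory, so treat this as illustrative; recv() already does the equivalent internally when MPI-3 is available, you never need to write this yourself):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.send([1, 2, 3], dest=1, tag=7)
    elif rank == 1:
        # Matched probe: the probe and the later receive refer to the same
        # matched message, so its pickled size is known before allocating
        # any buffer, and no other receive can steal it in between.
        status = MPI.Status()
        msg = comm.mprobe(source=0, tag=7, status=status)
        nbytes = status.Get_count(MPI.BYTE)  # exact size of the pickled payload
        obj = msg.recv()                     # no user-supplied buffer, no truncation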

Aron Ahmadia

Oct 15, 2016, 2:59:28 PM10/15/16
to mpi4py
Okay, I understand. Yes, I'd suggest deprecating the 'buf' kwarg here, since it's been superseded in MPI-3. Of course, we could probably fix this just by improving the documentation on the recv method :)


Lisandro Dalcin

Oct 15, 2016, 3:57:13 PM10/15/16
to mpi4py
On 15 October 2016 at 21:59, Aron Ahmadia <ar...@ahmadia.net> wrote:
Okay, I understand. Yes, I'd suggest deprecating the 'buf' kwarg here, since it's been superseded in MPI-3.

 
Of course, we could probably fix this just by improving the documentation on the recv method :)


"Documentation"? what is that? Use the source, Luke... :-)


