Simulation terminated with exit code: 137


Baba

Jan 17, 2012, 7:27:13 AM
to omnetpp
Hi,

I'm new to OMNeT++. I implemented a protocol, and during my simulation
I got this error message:

Simulation terminated with exit code: 137
Working directory: /home/diarra/Desktop/myWorkspace/PeerSampling
Command line: PeerSampling -r 0 -u Cmdenv -c General -n .:../../software/inet/examples:../../software/inet/src -l ../../software/inet/src/inet omnetpp.ini

I don't know what kind of error this is. Can someone please help me?


Thanks

Rudolf Hornig

Jan 17, 2012, 7:59:12 AM
to omn...@googlegroups.com
On Unix, exit codes above 128 mean 128 + signal number.

137 means that your process received a KILL signal (137 - 128 = 9 = SIGKILL)...
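You can reproduce the 128 + signal convention in any POSIX shell. A minimal sketch (`sleep` stands in for the simulation process):

```shell
# SIGKILL is signal 9, so a killed process reports exit code 128 + 9 = 137.
sleep 30 &             # stand-in for the long-running simulation
pid=$!
kill -KILL "$pid"      # same as kill -9
wait "$pid"            # wait reports the child's exit status
echo "exit code: $?"   # prints: exit code: 137
```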


--
Sent from the OMNeT++ mailing list. To configure your membership,
visit http://groups.google.com/group/omnetpp

Krishna Oza

Oct 11, 2012, 8:51:04 AM
to omn...@googlegroups.com
What's the solution for that? I am also running simulations on INETMANET and got the same error, but I have nearly 600 ad-hoc hosts running OLSR and sending TCP data to each other.

Anjanreddy Kasireddy

Oct 11, 2012, 8:52:13 AM
to omn...@googlegroups.com
Hi, what about exit code 139? Can anyone tell me the solution, please?




--
With Regards,

K ANJAN REDDY,

M-Tech , IIT MADRAS

Rudolf Hornig

Oct 11, 2012, 9:20:17 AM
to omn...@googlegroups.com
The kill signal is usually sent by the operating system if you run out of memory... I assume you have no page file and ran out of memory...
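On Linux you can check whether the OOM killer was responsible by searching the kernel log. A sketch (reading the ring buffer may require root, and the exact message wording varies between kernel versions):

```shell
# The Linux OOM killer logs a line like "Out of memory: Killed process ..."
# to the kernel ring buffer when it SIGKILLs a process.
dmesg 2>/dev/null | grep -iE "out of memory|oom-kill|killed process" \
    || echo "no OOM-kill events found in the kernel log"
```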

Rudolf Hornig

Oct 11, 2012, 9:22:01 AM
to omn...@googlegroups.com
Exit code 139 means signal 11 = SIGSEGV: you have a bug in the model which corrupts memory.

You have to debug your model.
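The same arithmetic applies: 139 = 128 + 11. You can reproduce the exit code by sending a process SIGSEGV (a sketch; a real segfault caused by corrupted memory produces the same code):

```shell
# 139 = 128 + 11 (SIGSEGV). A child shell that receives signal 11 dies
# with the signal's default action, and the parent observes exit code 139.
sh -c 'kill -SEGV $$'
echo "exit code: $?"   # prints: exit code: 139
```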

Krishna Oza

Oct 12, 2012, 2:42:58 AM
to omn...@googlegroups.com
What is the solution for this? Is this an issue with OMNeT++ development or with the OS? If it is the OS, please let us know how to fix it.

Rudolf Hornig

Oct 13, 2012, 11:41:59 AM
to omn...@googlegroups.com


On Friday, October 12, 2012 8:42:58 AM UTC+2, Krishna Oza wrote:
What is the solution for this? Is this an issue with OMNeT++ development or with the OS? If it is the OS, please let us know how to fix it.

Run it on a machine which has enough memory for your simulation.
Rudolf

Krishna Oza

Dec 19, 2012, 12:19:40 AM
to omn...@googlegroups.com
I have an Intel Xeon @ 3.20 GHz with 4 GB RAM and 10 GB swap. Just out of curiosity, what configuration would you suggest for a 200-node network heavily loaded with traffic? When I monitor the machine, both RAM and swap get exhausted. Can I apply any kind of optimization so that less memory is required?

Rudolf Hornig

Dec 19, 2012, 4:58:45 AM
to omn...@googlegroups.com
I cannot guess it from far away, but here are some hints:

- First run the simulation with a smaller number of nodes and observe whether the memory requirement of the process is stable during the whole simulation (i.e. try to find a network that is small enough to still run on your machine). Check how much memory the process requires. If the memory is constantly growing no matter how small the network is, then you have a memory-leak bug somewhere in the code.

- If you can find a working network size, experiment with the network size up and down and watch the changes in memory consumption. That would at least give you a general idea about the actual limits...

- I'm not sure about the limits, but theoretically ad-hoc routing protocols are O(n^2) in memory, i.e. the memory consumption is quadratic, because the size of the routing table in each node is proportional to the number of nodes...

- It could also be caused by the heavy traffic. If you have a component somewhere that does not limit its queue length and you keep putting more and more data into it, it will eventually consume all memory...

So if you see ever-increasing memory consumption (i.e. it never stabilizes), I would suspect that the heavy traffic is causing the memory loss. (In that case you should configure the offending queues to start dropping packets after a while.)
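Watching whether the process's memory stabilizes can be scripted. A minimal sketch (the `monitor_rss` helper name and the 5-second default interval are illustrative; `ps -o rss=` reports resident memory in kilobytes on Linux):

```shell
# Sample a process's resident set size (KB) until it exits. Steadily
# growing RSS at a fixed network size suggests a leak or an unbounded queue.
monitor_rss() {   # usage: monitor_rss <pid> [interval_seconds]
    pid=$1
    interval=${2:-5}
    while kill -0 "$pid" 2>/dev/null; do   # loop while the process exists
        ps -o rss= -p "$pid"
        sleep "$interval"
    done
}

# Example (PeerSampling is the binary name from the original post):
# ./PeerSampling -u Cmdenv -c General omnetpp.ini & monitor_rss $!
```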

Rudolf

Sabari Nathan

Apr 3, 2017, 9:01:03 PM
to OMNeT++ Users
Hi Krishna,
I tried simulating 100 nodes with OLSR, but I got the same error. Can you tell me how you resolved it?

Thanks in advance,
Sabari

Krishna Oza

Apr 3, 2017, 11:52:22 PM
to OMNeT++ Users
Hi Sabari,

As Rudolf and others have already mentioned, a process receives SIGKILL if it runs out of memory; it seems that this is the case for you too.

What worked for me was increasing the main memory so that the simulation process had enough memory for simulating 100+ ad-hoc nodes running OLSR.

What you can try first is to simulate at 50% of the capacity you are simulating right now and observe the memory consumption, then gradually increase the number of nodes in the network while watching the memory consumption of the machine (both main memory and swap space, if you are using Linux).

By the way, what machine configuration are you using right now?

Do let us know if you face any further issues, or if things work fine for you.

Sabari Nathan

May 25, 2017, 11:52:45 AM
to omn...@googlegroups.com
Hi Krishna,
Thanks for your valuable input. It did help me a lot. I ramped up my RAM to 16 GB and also changed my OS from 32-bit to 64-bit. As of now, the simulations are running.

But the problem now is that the simulation is very slow even in Express mode. It has taken 24 hours to simulate 24 seconds. Although the simulation time depends on the amount of traffic I set up, I do see that out of the 8 processor cores my system has, the simulation uses only one core, while the BATMAN protocol uses all 8 cores.
Any suggestions on this front? Or is this an implementation problem of OLSR in INET?


Michael Kirsche

May 29, 2017, 9:20:44 AM
to OMNeT++ Users
A simulation run is always single-core.
As Rudolf nicely explained in this post (https://groups.google.com/d/msg/omnetpp/1-CX5SAxM64/2ZOVYrB2BQAJ), you can and should run multiple runs with (at least) different seeds to generate different random numbers and gain statistical confidence in your simulation results.
If your simulation runs too slowly, you either need a faster CPU, or you should switch to release mode, run your simulation from the command shell, and (most importantly) try to optimize the simulation itself by reducing its complexity, limiting the logging, etc.
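Since each run is single-core, the way to use all 8 cores is to launch independent runs in parallel, one per core. A sketch (`$SIM` is a placeholder; by default OMNeT++ derives the seed set from the run number, so different `-r` values give independent random streams):

```shell
# Launch four independent runs in parallel, one per core. The 'echo'
# prefix makes this a dry run; remove it to execute the real binary.
SIM="echo ./PeerSampling -u Cmdenv -c General"
for r in 0 1 2 3; do
    $SIM -r "$r" > "run-$r.log" 2>&1 &
done
wait   # block until all background runs have finished
cat run-0.log run-1.log run-2.log run-3.log
```

OMNeT++ also ships the `opp_runall` helper for launching run batches like this.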