Increase resources for OMNeT++ simulation


Alex

Mar 18, 2013, 12:33:58 PM3/18/13
to omn...@googlegroups.com
Dear all

Is there a possibility to increase the resources that are used by an OMNeT++ simulation?
What my implementation does: some caching and message exchange between all nodes that see each other.

Problem:
For example, if I simulate networks larger than 25 nodes with a playground size larger than 200m x 200m, I cannot collect the information of all nodes in the module's finish() function. The information is stored in variables during the run and written to files in finish() at the end of the simulation. If I have "too many" nodes, no information is stored in the variables. I do not use cOutVector or cLongHistogram because it is more convenient for me to process the information stored in files. I guess the amount of information processed during my simulation reaches a certain limit.
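The pattern looks roughly like this (a minimal sketch with made-up field and function names, not my actual module code):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Hypothetical stand-in for the per-node statistics a module might
// accumulate in member variables during the run and dump in finish().
struct NodeStats {
    long packetsReceived = 0;
    long cacheHits = 0;
};

// Serializes the collected statistics to a stream; in an OMNeT++ module
// this would be called from finish(), writing into an std::ofstream.
std::string dumpStats(const std::map<std::string, NodeStats>& stats) {
    std::ostringstream out;
    for (const auto& entry : stats)
        out << entry.first << " rx=" << entry.second.packetsReceived
            << " hits=" << entry.second.cacheHits << "\n";
    return out.str();
}
```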

Therefore, my question: Is there a possibility to increase that limit? Any hint is greatly appreciated.

Best regards,
Alex

Alex

Mar 18, 2013, 2:52:01 PM3/18/13
to omn...@googlegroups.com
When I show the heap status in the IDE during the command-line runs, the heap size is always well below the maximum size.

heap size: 80 - 128M of total: 180M, max: 495M mark: 77M

However, the simulations with more nodes take considerably more time because more events are generated. Is there a limit on the number of events that can be processed per second? Is there a way to increase it by decreasing the simulation speed?



Sample Output of Unsuccessful simulation (30 nodes):
*******************************************************************************************************
Running simulation...
** Event #1   T=0   Elapsed: 0.000s (0m 00s)  0% completed
     Speed:     ev/sec=0   simsec/sec=0   ev/simsec=0
     Messages:  created: 480   present: 480   in FES: 210
** Event #235776   T=163.618126245857   Elapsed: 2.000s (0m 02s)  13% completed
     Speed:     ev/sec=117888   simsec/sec=81.8091   ev/simsec=1441.01
     Messages:  created: 156026   present: 5779   in FES: 2433
** Event #474880   T=413.567659960493   Elapsed: 4.001s (0m 04s)  34% completed
     Speed:     ev/sec=119552   simsec/sec=124.975   ev/simsec=956.609
     Messages:  created: 277389   present: 3590   in FES: 1051
** Event #710144   T=617.250066240814   Elapsed: 6.002s (0m 06s)  51% completed
     Speed:     ev/sec=117573   simsec/sec=101.79   ev/simsec=1155.05
     Messages:  created: 414097   present: 4336   in FES: 1200
** Event #906496   T=922.208787312804   Elapsed: 8.003s (0m 08s)  76% completed
     Speed:     ev/sec=98176   simsec/sec=152.479   ev/simsec=643.864
     Messages:  created: 464452   present: 3180   in FES: 541
** Event #1076152   T=1200   Elapsed: 9.219s (0m 09s)  100% completed
     Speed:     ev/sec=139519   simsec/sec=228.447   ev/simsec=610.728
     Messages:  created: 505526   present: 2476   in FES: 116

<!> Simulation time limit reached -- simulation stopped at event #1076152, t=1200.
*******************************************************************************************************



Sample Output of Successful simulation (20 nodes):
*******************************************************************************************************
Running simulation...
** Event #1   T=0   Elapsed: 0.000s (0m 00s)  0% completed
     Speed:     ev/sec=0   simsec/sec=0   ev/simsec=0
     Messages:  created: 320   present: 320   in FES: 140
** Event #252928   T=311.112862272101   Elapsed: 2.001s (0m 02s)  25% completed
     Speed:     ev/sec=126401   simsec/sec=155.479   ev/simsec=812.978
     Messages:  created: 154506   present: 4696   in FES: 1717
** Event #516864   T=608   Elapsed: 4.001s (0m 04s)  50% completed
     Speed:     ev/sec=131968   simsec/sec=148.444   ev/simsec=889.011
     Messages:  created: 294852   present: 5649   in FES: 479
** Event #637932   T=1200   Elapsed: 4.795s (0m 04s)  100% completed
     Speed:     ev/sec=152670   simsec/sec=746.532   ev/simsec=204.505
     Messages:  created: 297112   present: 4830   in FES: 60

<!> Simulation time limit reached -- simulation stopped at event #637932, t=1200.
*******************************************************************************************************



Alex

Mar 19, 2013, 5:03:28 AM3/19/13
to omn...@googlegroups.com
The caching is performed in std::maps. I used them as a replacement for hash tables because I did not have a hash table implementation at hand.
Anyway, when I look at the actual size of the maps at the end of the simulation, it is far below max_size().
For example, max_size() is 178956970 and the actual size is 60. Thus, I don't think that this is the problem.
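For reference, since C++11 the standard library does ship a hash table, std::unordered_map, which gives average O(1) lookup versus O(log n) for std::map. A sketch of using it as the cache (type and names are illustrative, not from the original code):

```cpp
#include <string>
#include <unordered_map>

// Illustrative cache type: key -> some cached counter/value.
using Cache = std::unordered_map<std::string, long>;

// Average O(1) lookup; returns 0 when the key is not cached.
long lookupOrZero(const Cache& cache, const std::string& key) {
    auto it = cache.find(key);
    return it == cache.end() ? 0 : it->second;
}
```

Note that max_size() is only a theoretical, allocator-derived bound for either container; the practical limit is available heap memory.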

However, there are many messages sent around (via IP multicast), and therefore many events.
--> Is there a limit on the number of messages that can exist concurrently, and if so, is it known approximately how large it is?

Since I send multicast messages, the IGMP protocol is used.
--> Can this be the problem in terms of scalability?

I would appreciate any hints.

Best regards,
Alex

Alex Berger

Mar 19, 2013, 8:53:36 AM3/19/13
to omn...@googlegroups.com
I assume that I create too many self-messages (timers for message expiration).

One question regarding this:
If more messages are scheduled than can be handled by the event queue, will they be silently dropped or will there be an error message?

Rudolf Hornig

Mar 19, 2013, 9:26:51 AM3/19/13
to omn...@googlegroups.com
There is NO limit on the event queue. Of course you can run out of memory, but that is a different matter.

Alex Berger

Mar 19, 2013, 2:49:56 PM3/19/13
to omn...@googlegroups.com
Dear Rudolf Hornig

Thank you for the reply. I still have three questions. You were right, the heap memory was a bit low. I increased the values in ide/omnetpp.ini to

-vmargs
-Xms1024m
-Xmx1536m
-XX:MaxPermSize=1024m


but without much effect. The heap status shows that more heap memory is available (heap size: 264M, total 990M, max 1485M, mark: <none>), but I still cannot simulate larger networks with the same content numbers. I may install the full Linux Tools Project to use Valgrind's Massif heap profiler; I guess the heap status does not show peak values that are only reached for a short time.
Since I have 8GB RAM, I may also try to run the 64-bit JVM.
1. Do you know whether there are any issues with 64-bit Linux distributions? I remember having read about some issues in the past.

In my simulations, there are sometimes really a lot of messages present (>30k) and in the FES (>10k).

2. Do you really think OMNeT++ can handle this?
I think the values are so high because I store a dup() of every message in the cache until a timer (self-message) fires and the message is deleted.
I may be able to reduce the FES size by avoiding separate timers for every received data packet and periodically checking whether something has timed out.
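The single-timer idea could look roughly like this (a sketch with hypothetical types, independent of the actual OMNeT++ message classes): one recurring self-message triggers a sweep over the cache instead of one scheduled event per entry.

```cpp
#include <map>

// Hypothetical cache entry: holds only its expiry time here; the cached
// payload or metadata would go alongside it.
struct Entry {
    double expiresAt;  // simulation time at which the entry times out
};

// Called from a single recurring self-message: removes every expired
// entry in one pass. Returns how many entries were purged.
int purgeExpired(std::map<long, Entry>& cache, double now) {
    int purged = 0;
    for (auto it = cache.begin(); it != cache.end(); ) {
        if (it->second.expiresAt <= now) {
            it = cache.erase(it);
            ++purged;
        } else {
            ++it;
        }
    }
    return purged;
}
```

With this scheme the FES holds one periodic timer per module rather than one timer per cached packet, at the cost of slightly coarser expiry granularity.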

3. If I stored the information of received packets in a separate data structure instead of the entire dup()'ed packet, would that help much? I guess these duplicated messages are counted as "present" until I delete them. Is that right?
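A sketch of what such a separate data structure might look like (field names are hypothetical, not from the actual module): a plain struct holding only the fields needed later, so no duplicated cMessage has to stay alive and be counted as "present".

```cpp
#include <string>

// Hypothetical record of a received packet: keeps only the fields
// needed for later processing instead of a dup() of the whole packet.
struct PacketInfo {
    long sequenceNumber;
    std::string sourceAddress;
    double arrivalTime;
};

// Instead of caching packet->dup(), one would cache this lightweight
// record, e.g. cache[seq] = makeInfo(...)  (illustrative).
PacketInfo makeInfo(long seq, const std::string& src, double t) {
    return PacketInfo{seq, src, t};
}
```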

Thanks,
Alex

Rudolf Hornig

Mar 20, 2013, 6:39:22 AM3/20/13
to omn...@googlegroups.com


On Tuesday, 19 March 2013 19:49:56 UTC+1, Alex Berger wrote:
Dear Rudolf Hornig

Thank you for the reply. I still have three questions. You were right, the heap memory was a bit low. I increased the values in ide/omnetpp.ini to

-vmargs
-Xms1024m
-Xmx1536m
-XX:MaxPermSize=1024m


but without much effect. The heap status shows that more heap memory is available (heap size: 264M, total 990M, max 1485M, mark: <none>), but I still cannot simulate larger networks with the same content numbers. I may install the full Linux Tools Project to use Valgrind's Massif heap profiler; I guess the heap status does not show peak values that are only reached for a short time.
Since I have 8GB RAM, I may also try to run the 64-bit JVM.
1. Do you know whether there are any issues with 64-bit Linux distributions? I remember having read about some issues in the past.

There is a bit of confusion here! You are changing the heap size of the Java VM used by the IDE. The IDE has NOTHING to do with the simulation process. Forget about the JVM, 64-bitness, Java heap size and all that: it has no effect on the process that runs your simulation (which is a C++ program, by the way).
 

In my simulations, there are sometimes really a lot of messages present (>30k) and in the FES (>10k).

2. Do you really think OMNeT++ can handle this?
I think the values are so high because I store a dup() of every message in the cache until a timer (self-message) fires and the message is deleted.
I may be able to reduce the FES size by avoiding separate timers for every received data packet and periodically checking whether something has timed out.
Yes. Possibly it could be slow, but it should not be a memory-related problem.

3. If I stored the information of received packets in a separate data structure instead of the entire dup()'ed packet, would that help much? I guess these duplicated messages are counted as "present" until I delete them. Is that right?

My feeling is that you have some other kind of bug in your simulation that has nothing to do with this whole thread at all.

You wrote: "If I have "too many" nodes, no information is stored in the variables."

You should debug your code and figure out why the information is not stored in the variables. Resource limits have nothing to do with this.

Alex Berger

Mar 21, 2013, 1:10:07 PM3/21/13
to omn...@googlegroups.com
Hi Rudolf Hornig

Thanks for the reply.


On Wednesday, March 20, 2013 11:39:22 AM UTC+1, Rudolf Hornig wrote:

There is a bit of confusion here! You are changing the heap size of the java VM that is used by the IDE. The IDE has NOTHING to do with the simulation process. Forget about JVM, 64-bitnes, java heap size and everything. It has no effect on the process that is running your simulation (which is a program written in C++ by the way).

You are right, I was confusing things. If there were a memory-related problem, would there be an error? The RAM usage does not increase much during normal execution even though there are many events, so I guess I need to get Valgrind's heap profiler running.
 
my feeling is that you have some other kind of bug in your simulation that has nothing to do with the whole thread at all.
That's possible. I did debug the code with a few nodes and everything was fine and working as it is supposed to.
Valgrind shows some problems, though not directly in my code but in some modules that I use.
For example: use of an uninitialized value in cModule::callFinish, and some memory leaks in, e.g., RoutingTable::initialize(), RoutingTable::configureInterfaceForIPv4, IPv4NetworkConfigurator::optimizeRoutes, etc. Some of these modules seem pretty basic. I guess I have to look at those next.

Best regards,
Alex

