Dear colleagues,
I have a Java-based application-layer routing protocol that I developed myself.
I successfully tested it on four Raspberry Pis over an IEEE 802.11n network.
Now I need to run the same program in simulation with minimal code changes, and Linux Containers (LXC) seemed like a good option.
I copied the tap-bridge Wi-Fi example located at "ns-3.26/src/tap-bridge/examples/tap-wifi-virtual-machine.cc", extended it to 10 nodes, and equipped each node with two IEEE 802.11 Wi-Fi interfaces.
I also set up bridges and tap interfaces to connect the Linux containers, so that each container has two virtual Wi-Fi network interfaces.
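For concreteness, this is the kind of plumbing I mean, one bridge/tap pair per container interface (so two pairs per container here); the device names br-0a and tap-0a are just illustrative, and the commands follow the usual bridge-utils/tunctl style:

```shell
# Bridge + tap pair for container 0, Wi-Fi interface "a".
# ns-3's TapBridge attaches to the tap; the container's veth
# joins the same bridge.
brctl addbr br-0a                     # create the bridge
tunctl -t tap-0a                      # create the tap device for ns-3
ifconfig tap-0a 0.0.0.0 promisc up    # no IP on the tap, promiscuous mode
brctl addif br-0a tap-0a              # attach the tap to the bridge
ifconfig br-0a up                     # bring the bridge up
# ...repeat with br-0b / tap-0b for the second Wi-Fi interface,
# and analogously for the other 9 containers.
```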
And finally I had to conclude that the ns-3 core does not scale for this setup. After starting the simulation, there was no problem if I just pinged another node in the network. But once I start my routing protocol, which generates small "hello" packets every second, the ns-3 core consumes 97-110% of a CPU. If I then ping a neighbor node 20 m away, the RTT exceeds 20 seconds and keeps increasing until I quit. So far, I believe the bottleneck is not that the containers consume a lot of computing resources, but that the ns-3 core cannot run in parallel: each container consumed less than 3% of the CPU.
I googled for parallel simulation, but everything I found only covers point-to-point links, and none of it considers Linux containers.
My goal is to run simulations with my Java-based routing protocol, which consists of many lines of code spread over many classes.
Right now it looks like "real-time" simulation with containers is practically impossible at this scale. But if I am wrong, please point me to a solution; I really need one.
Actually, I don't even need strictly "real-time" simulation. It would be perfectly fine if the clock could be synchronized with the simulated network, but I have no idea how to do that between the ns-3 simulated network and my Java program.
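One direction I could imagine, if the simulation ran at some fixed fraction of real time, would be to hide my protocol's notion of time behind an interface and scale it to match. A rough sketch, with all names (Clock, WallClock, ScaledClock) and the scale factor purely illustrative, not any existing API:

```java
// Sketch: abstract the protocol's clock so the same code runs on
// wall-clock time (on the Raspberry Pis) or on a scaled clock that
// matches a slower-than-real-time simulation.

interface Clock {
    long nowMillis();                               // current protocol time
    void sleepMillis(long ms) throws InterruptedException;
}

// Real time, used on the physical testbed.
final class WallClock implements Clock {
    public long nowMillis() { return System.currentTimeMillis(); }
    public void sleepMillis(long ms) throws InterruptedException {
        Thread.sleep(ms);
    }
}

// Protocol time advances at `factor` times real time, e.g. 0.1 if the
// simulator can only keep up with one tenth of real time.
final class ScaledClock implements Clock {
    private final double factor;
    private final long realStart = System.currentTimeMillis();
    ScaledClock(double factor) { this.factor = factor; }
    public long nowMillis() {
        return (long) ((System.currentTimeMillis() - realStart) * factor);
    }
    public void sleepMillis(long ms) throws InterruptedException {
        Thread.sleep((long) (ms / factor));  // sleep longer in real time
    }
}

public class ClockDemo {
    public static void main(String[] args) throws InterruptedException {
        Clock clock = new ScaledClock(0.5);  // protocol runs at half speed
        long t0 = clock.nowMillis();
        clock.sleepMillis(100);              // 100 ms of protocol time
        long elapsed = clock.nowMillis() - t0;
        System.out.println(elapsed);
    }
}
```

The protocol's "hello every second" timer would then tick in protocol time, so its load on the simulator drops proportionally. The hard part, of course, is knowing what factor ns-3 actually sustains, which is exactly what I don't know how to determine or synchronize.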
There is DCE for directly running user application code, but it only accepts C/C++... Is there any other way to use it?
Any other suggestions are appreciated!
Best Regards,
-- Byoungheon Shin