Help with TCP simulation model!

Jorden Kerkhof

unread,
Jan 15, 2020, 9:02:57 AM1/15/20
to OMNeT++ Users
Dear reader,

For my graduation project I have to design network device hardware. Before I start creating a VHDL model for it, I first want to test and verify some parameters in a simplified network model. I don't have much experience with networking or with network simulators. I found OMNeT++ to be a modular network simulator that I think can model my system; however, I need some help, because I am not comfortable programming in C++. The simplified system I want to simulate can be found in the attachment. I will explain the concept:

Node 1
Node 1 will have to act as a TCP source which sends an infinite amount of data to Node 2 over an Ethernet link. I want to control the following parameters:
  • TCP parameters:
    • Congestion control algorithm
    • Recovery algorithm
    • TCP-options (like SACK, Timestamp, Wscale)
    • Sending datarate
  • Ethernet link parameters
    • Bandwidth
    • Delay
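To make this concrete, this is roughly how I would hope to set the TCP parameters in omnetpp.ini (the parameter names are my assumptions based on what I found in the INET documentation, so please correct me if they are wrong):

```ini
# Node 1 TCP parameters (names assumed from INET's Tcp module)
**.node1.tcp.tcpAlgorithmClass = "TcpNewReno"  # congestion control / recovery algorithm
**.node1.tcp.sackSupport = true                # TCP option: SACK
**.node1.tcp.timestampSupport = true           # TCP option: Timestamp
**.node1.tcp.windowScalingSupport = true       # TCP option: Wscale
# I assume the sending data rate is controlled by the traffic-generator
# app rather than by TCP itself, and that the Ethernet bandwidth and
# delay are set on the channel in the .ned file.
```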
Node 2
The second node will have 2 applications running.
  1. Application 1 will act as a TCP sink, receiving all the packets from Node 1 and acknowledging them.
  2. Application 2 will send all packets received from Node 1 on to Node 3. (The link type doesn't matter.)
Node 3
The third node will also have 2 applications running:
  1. Application 1 will receive all the packets coming from Node 2 and store them in a buffer.
  2. Application 2 will act as a TCP source which sends all the data from the buffer to Node 4 using TCP. For this TCP source I want to adjust the following parameters:
    • Recovery algorithm
    • TCP options (SACK, Timestamp, Wscale)
    • Buffer size of Node 3
Node 4
This node will solely act as a TCP sink, only receiving packets from Node 3 and acknowledging them.
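The overall topology I have in mind would look roughly like the following NED sketch (module and channel types are my assumptions based on INET's StandardHost and inline channel parameters; the attached diode.ned may differ):

```ned
// Sketch only -- module/channel types assumed from INET, not tested
import inet.node.inet.StandardHost;

network Diode
{
    submodules:
        node1: StandardHost;  // TCP source
        node2: StandardHost;  // TCP sink + relay application
        node3: StandardHost;  // buffer + second TCP source
        node4: StandardHost;  // TCP sink
    connections:
        // Ethernet link with controllable bandwidth and delay
        node1.ethg++ <--> { datarate = 100Mbps; delay = 1ms; } <--> node2.ethg++;
        // link type between node2 and node3 doesn't matter
        node2.pppg++ <--> { datarate = 100Mbps; delay = 1ms; } <--> node3.pppg++;
        node3.ethg++ <--> { datarate = 100Mbps; delay = 1ms; } <--> node4.ethg++;
}
```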

As I said, I am not very familiar with C++, so I started by trying to create a simpler design. I used the tcp-ClientServer example and wanted to adjust the algorithm. I set up a server and a client, configured as a TcpSessionApp and a TcpSinkApp. You will find the .ned and .ini files in the attachment.
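For reference, my simplified configuration looks roughly like this (app type and parameter names as I understood them from the INET samples; I may have mis-typed something when copying):

```ini
[General]
network = ClientServer

# client: TcpSessionApp sending a fixed amount of data
**.client.numApps = 1
**.client.app[0].typename = "TcpSessionApp"
**.client.app[0].connectAddress = "server"
**.client.app[0].connectPort = 1000
**.client.app[0].tOpen = 0.2s
**.client.app[0].tSend = 0.4s
**.client.app[0].sendBytes = 1MiB
**.client.tcp.tcpAlgorithmClass = "TcpNewReno"

# server: TcpSinkApp acknowledging everything it receives
**.server.numApps = 1
**.server.app[0].typename = "TcpSinkApp"
**.server.app[0].localPort = 1000
```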

When I set tcpAlgorithmClass to TcpNewReno, I expected the congestion window to have the typical "saw-tooth" shape, which was not the case. What I got is the shape that can be seen in "tcpnewreno-cwnd.pdf". As you can see, it only has a slow-start phase and no congestion-avoidance phase, as the total number of bytes sent increases exponentially. Maybe this is because the slow-start threshold (ssthresh) has not been reached yet. Also, the data rate is set to 100 Mbps, but the results in Wireshark only show 2.7 Mbps; why is this?
In "tcpnewreno-cwnd.pdf" you can see that no more than 7000 bytes are ever "in flight", while the window advertised by the receiver is higher (7488); why is this? In the example the total-stack was set to 7 MiB, but when I increased this value, nothing happened.

Also, when I change the "per" (packet error rate), the algorithm recovers by doing a slow start from a window of one MSS, instead of halving the current window size as described in RFC 2581.

Is there somebody who can help me with this, or who could at least explain what I am doing wrong or interpreting wrong in the part I have already designed?
simulation-overview.pdf
diode.ned
omnetpp.ini
tcpnewreno-cwnd.pdf