V4Ping Application does not report RTTs

Jens Eirik

Jul 15, 2019, 8:25:50 AM
to ns-3-users
Hello everyone!

I'm using ns-3 to model a satellite network as part of my bachelor's thesis. To that end, I implemented a satellite network module that allows me to simulate traffic on such constellations of satellites, and I now want to run simulations on these setups. I model satellites and ground stations, both implementing the ns3::Node functionality, and connect them via PointToPoint links.

Each ground station is covered by multiple satellites, so each ground station (node) has multiple net devices installed, with exactly one interface per net device. Each interface is then assigned an IPv4 address within the ground station's own subnet.

Example: New York's subnet is 20.0.0.0/24 and London's subnet is 20.0.14.0/24.

My goal now is to report the RTT between New York and London periodically, every second. To that end, I install a packet sink at both ground stations (this is used for different scenarios with more traffic; might this be the problem?) and a V4Ping application at New York's ground station. I always ping the first IPv4 address of a given ground station's subnet, i.e. 20.0.14.1 in this case for London. I use the tracing functions of ns-3 as follows:

// Trace sink for V4Ping's "Rtt" trace source; prints one line per echo reply.
static void reportRTT (std::string context, Time rtt)
{
  std::cout << context << "," << rtt << std::endl;
}
In the main function of the simulation script, I then connect the trace sink to the trace source as follows:
Config::Connect("/NodeList/*/ApplicationList/*/ns3::V4Ping/Rtt", MakeCallback(&reportRTT));
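
For completeness, the ping application itself is set up roughly like this (a minimal sketch of my setup; newYorkNode is a placeholder for the actual ground-station node):

V4PingHelper ping (Ipv4Address ("20.0.14.1"));             // ping London's first address
ping.SetAttribute ("Interval", TimeValue (Seconds (1.0))); // report every second
ApplicationContainer apps = ping.Install (newYorkNode);
apps.Start (Seconds (0.0));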

The tracing setup mimics the csma-ping.cc example, which reports the desired RTTs without problem. However, my simulation run does not produce any RTT output. I enabled PCAP tracing, so I can tell that the ICMP packets are actually being sent and received, but at different interfaces: New York uses IPv4 address 20.0.0.6 (interface 6) to send the ping at time 0.0 s and receives its response from 20.0.14.1 after 60.9 ms at interface 12. In my view, the packet would then be internally forwarded to its final destination, namely interface 6, at IPv4 address 20.0.0.6.

However, the reportRTT function is never called, so no RTTs are ever printed. There are even connections (e.g. New York to Lagos) where the ICMP packet is sent and its answer received at the same interface, but still no output is generated.

Has anyone run into similar issues, or does anyone know how to measure the RTTs differently?

Thanks!

Tom Henderson

Jul 15, 2019, 9:20:54 AM
to ns-3-...@googlegroups.com, Jens Eirik
On 7/15/19 5:25 AM, Jens Eirik wrote:
> Hello everyone!
>
> I'm using ns-3 to model a satellite network as part of my bachelor's
> thesis. To that end, I implemented a satellite network module that
> allows me to simulate traffic on such constellations of satellites, and
> I now want to run simulations on these setups. I model satellites and
> ground stations, both implementing the ns3::Node functionality, and
> connect them via PointToPoint links.
>
> Each ground station is covered by multiple satellites, so each ground
> station (node) has multiple net devices installed, with exactly one
> interface per net device. Each interface is then assigned an IPv4
> address within the ground station's own subnet.
>
> *Example:* New York's subnet is 20.0.0.0/24 and London's subnet is 20.0.14.0/24 [...]
From the description, my guess is that something is going wrong at the IP layer rather than in the application. You mention that in most cases you are sending a packet out of one interface but it returns on another interface. I suggest running your program with logging enabled and inspecting the logging output for clues. I would try first:

NS_LOG="Ipv4L3Protocol" ./waf --run your-program-name

and you may want to redirect the log output into a file to review it later:

... --run your-program-name > log.out 2>&1
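
If the default output is too sparse, you can add log levels and prefixes to the component selector (standard NS_LOG syntax), for example:

NS_LOG="Ipv4L3Protocol=level_all|prefix_time|prefix_node" ./waf --run your-program-name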

- Tom

Jens Eirik

Jul 15, 2019, 10:06:23 AM
to ns-3-users
Hey Tom,

First of all, thanks for the fast response. I will try this later. However, I don't really understand your intuition concerning the IP layer.
In another set of simulations, I ran multiple OnOffApplications at each ground station, sending over TCP to other ground stations, and this worked without any problem.

I might be wrong, but assume the following topology (two P2P links, the upper with a delay of 10 ms, the lower with a 1 s delay). Let the upper interface of N1 have IP 10.0.0.1 and the lower interface of N1 have IP 10.0.0.2; likewise for N2, 10.1.0.1 and 10.1.0.2. Then, in my view, any TCP connection between 10.0.0.2 and 10.1.0.2 should still use the upper link due to the delay, right? When sending, the routing table in N1 should state that to reach 10.1.0.2 I should use the upper link, because it is shorter and thus the routing algorithm should prefer this route.

       _______________
      /               \
  N1                    N2
      \_______________/

If that's the case, I don't really see why this seems to imply that something is going wrong at the IP layer.

Jens Eirik

Tom Henderson

Jul 15, 2019, 10:27:41 AM
to ns-3-...@googlegroups.com, Jens Eirik


On 7/15/19 7:06 AM, Jens Eirik wrote:
> Hey Tom,
>
> First of all, thanks for the fast response. I will try this later.
> However, I don't really understand your intuition concerning the IP
> layer. In another set of simulations, I ran multiple OnOffApplications
> at each ground station, sending over TCP to other ground stations, and
> this worked without any problem.

OK, I did not know this, but if you have isolated the problem to the application in use, you can focus your debugging there instead. In general, I selectively enable NS_LOG components until I narrow down where the problem seems to be. It could also be that the trace sinks are not connected, but your config path with wildcards looks OK, and it was copied from another example known to work.

I would personally try Ipv4L3Protocol and the ICMP-related classes first
to check that everything is as expected.
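
For example, something like this (Icmpv4L4Protocol and V4Ping are the relevant log components here):

NS_LOG="Ipv4L3Protocol:Icmpv4L4Protocol:V4Ping" ./waf --run your-program-name > log.out 2>&1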

>
> I might be wrong, but assume the following topology (two P2P links, the
> upper with a delay of 10 ms, the lower with a 1 s delay). Let the upper
> interface of N1 have IP 10.0.0.1 and the lower interface of N1 have IP
> 10.0.0.2; likewise for N2, 10.1.0.1 and 10.1.0.2. Then, in my view, any
> TCP connection between 10.0.0.2 and 10.1.0.2 should still use the upper
> link due to the delay, right? When sending, the routing table in N1
> should state that to reach 10.1.0.2 I should use the upper link, because
> it is shorter and thus the routing algorithm should prefer this route.
>
>        _______________
>       /               \
>   N1                    N2
>       \_______________/
>
> If that's the case, I don't really see why this seems to imply that
> something is going wrong at the IP layer.

I am not sure from the description above (maybe there is a typo), but I
would expect the upper link to have IP addresses from the same subnet
(10.0.0.1 and 10.0.0.2) and the lower link to have addresses from a
different subnet (10.1.0.1 and 10.1.0.2). If the N1 sender wants to reach
N2's interface on the lower, longer-delay link, it may be able to use the
upper link, but it may instead use the lower link because it has an
interface on that subnet (i.e. it depends on the routing protocol used).
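
One way to check is to dump the routing tables at runtime; a minimal sketch, assuming a routing helper that supports table printing (e.g. global routing):

Ptr<OutputStreamWrapper> routingStream = Create<OutputStreamWrapper> ("routes.txt", std::ios::out);
Ipv4GlobalRoutingHelper::PrintRoutingTableAllAt (Seconds (0.5), routingStream);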

- Tom

Jens Eirik

Jul 15, 2019, 11:38:21 AM
to ns-3-users
It's not a typo; this is intended to model what I'm doing in my simulations. As you can probably imagine, such networks consist of thousands of satellites interconnected via laser links. In my current simulation there are roughly 1600 nodes and 4000 P2P links. To be able to trace where packets go and identify them more easily, I've opted to assign the IP prefix 20.0.i.0/24 to the i-th ground station in the simulation. Interfaces of the i-th satellite are assigned the prefix 10.0.i.0/24.
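
In code, that assignment looks roughly like this (a sketch; stationDevices is a placeholder for the i-th ground station's net devices):

std::ostringstream net;
net << "20.0." << i << ".0";
Ipv4AddressHelper addr;
addr.SetBase (net.str ().c_str (), "255.255.255.0");
Ipv4InterfaceContainer ifaces = addr.Assign (stationDevices); // 20.0.i.1, 20.0.i.2, ...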

Thus, routing works entirely differently, and I use NixVectorRouting for performance reasons because the constellations are so huge.
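
For reference, the stack is set up roughly like this (a sketch; nodes stands for my satellite and ground-station containers):

Ipv4NixVectorHelper nixRouting;
InternetStackHelper stack;
stack.SetRoutingHelper (nixRouting); // takes effect on the subsequent Install ()
stack.Install (nodes);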