LTE: An assertion failure occurs at packet sizes of 1473 bytes and above


Peter Riedemann

Jan 21, 2016, 6:25:19 AM
to ns-3-users
Hi ns3 experts,

it's me again. I've implemented an LTE scenario. Five user equipments (UEs, called homeNodes in the example program) are defined and attached to an eNodeB. Seven normal nodes (SNs, called serverNodes in the example program) are connected to the PGW node via point-to-point links (the number of SNs can be changed from 1 to an arbitrarily large number if you want). On one SN, a UDP echo server is installed. On one UE and on the other SNs, the corresponding UDP echo clients are installed. Everything works fine as long as the packet size is below 1473 bytes. Once I change the packet size to 1473 bytes or more, the following assertion failure occurs and the program terminates immediately:

...
At time 2s client sent 1473 bytes to 176.0.0.2 port 8080
At time 2.01494s server received 1473 bytes from 7.0.0.6 port 49153
At time 2.01494s server sent 1473 bytes to 7.0.0.6 port 49153
assert failed. cond="m_current >= m_dataStart && m_current <= m_dataEnd", msg="You have attempted to read beyond the bounds of the available buffer space. This usually indicates that a Header::Deserialize or Trailer::Deserialize method is trying to read data which was not written by a Header::Serialize or Trailer::Serialize method. In short: check the code of your Serialize and Deserialize methods.", file=./ns3/buffer.h, line=1009
terminate called without an active exception

I've attached the example code. Just copy the example into the scratch folder. Can you tell me what I have to change so that the program also runs correctly with a packet size greater than 1472 bytes?
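
In case the attachment doesn't come through, the scenario boils down to roughly this (a simplified sketch modeled on the lena-simple-epc example, with one SN instead of seven; rates, delays, positions and timings are placeholders, not the exact values from my file):

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/mobility-module.h"
#include "ns3/applications-module.h"
#include "ns3/lte-module.h"

using namespace ns3;

int
main (int argc, char *argv[])
{
  Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
  Ptr<PointToPointEpcHelper> epcHelper = CreateObject<PointToPointEpcHelper> ();
  lteHelper->SetEpcHelper (epcHelper);

  // One server node (SN) behind the PGW, connected via point-to-point
  Ptr<Node> pgw = epcHelper->GetPgwNode ();
  NodeContainer serverNodes;
  serverNodes.Create (1);
  InternetStackHelper internet;
  internet.Install (serverNodes);

  PointToPointHelper p2ph;
  p2ph.SetDeviceAttribute ("DataRate", DataRateValue (DataRate ("100Gb/s")));
  p2ph.SetDeviceAttribute ("Mtu", UintegerValue (1500)); // downlink IP fragmentation happens here
  p2ph.SetChannelAttribute ("Delay", TimeValue (MilliSeconds (10)));
  NetDeviceContainer internetDevices = p2ph.Install (pgw, serverNodes.Get (0));

  Ipv4AddressHelper ipv4h;
  ipv4h.SetBase ("176.0.0.0", "255.0.0.0"); // the SN gets 176.0.0.2, as in the log
  Ipv4InterfaceContainer internetIfaces = ipv4h.Assign (internetDevices);

  // The SN needs a route back to the UE subnet (7.0.0.0/8)
  Ipv4StaticRoutingHelper routingHelper;
  routingHelper.GetStaticRouting (serverNodes.Get (0)->GetObject<Ipv4> ())
    ->AddNetworkRouteTo (Ipv4Address ("7.0.0.0"), Ipv4Mask ("255.0.0.0"), 1);

  // One eNodeB and five UEs (the homeNodes)
  NodeContainer enbNodes, homeNodes;
  enbNodes.Create (1);
  homeNodes.Create (5);
  MobilityHelper mobility;
  mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
  mobility.Install (enbNodes);
  mobility.Install (homeNodes);

  NetDeviceContainer enbDevices = lteHelper->InstallEnbDevice (enbNodes);
  NetDeviceContainer homeDevices = lteHelper->InstallUeDevice (homeNodes);
  internet.Install (homeNodes);
  epcHelper->AssignUeIpv4Address (homeDevices);
  for (uint32_t i = 0; i < homeNodes.GetN (); ++i)
    {
      routingHelper.GetStaticRouting (homeNodes.Get (i)->GetObject<Ipv4> ())
        ->SetDefaultRoute (epcHelper->GetUeDefaultGatewayAddress (), 1);
    }
  lteHelper->Attach (homeDevices, enbDevices.Get (0)); // activates the default bearer

  // UDP echo server on the SN, echo client on one UE
  uint16_t port = 8080;
  UdpEchoServerHelper echoServer (port);
  ApplicationContainer serverApps = echoServer.Install (serverNodes.Get (0));
  serverApps.Start (Seconds (1.0));

  UdpEchoClientHelper echoClient (internetIfaces.GetAddress (1), port);
  echoClient.SetAttribute ("MaxPackets", UintegerValue (1));
  echoClient.SetAttribute ("PacketSize", UintegerValue (1473)); // 1472 works, 1473 asserts
  ApplicationContainer clientApps = echoClient.Install (homeNodes.Get (0));
  clientApps.Start (Seconds (2.0));

  Simulator::Stop (Seconds (10.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}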

Thanks,
Peter

Konstantinos

Jan 21, 2016, 6:36:52 AM
to ns-3-users
Hi Peter,

Your file was not attached, so I cannot test it.
However, you asked the same question recently and got an answer from Tommaso.

Have you applied the solution reported there? If so, perhaps that solution is not optimal...
Can you try to upload the file again?
It would also help if you debugged the scenario to find which part of the code raises this assertion (i.e. which packet/header it tried to read).

K.

Peter Riedemann

Jan 21, 2016, 7:04:02 AM
to ns-3-users
Hi,
Sorry. Here is the file.

Regards,
Peter
myTestFile.cc

Peter Riedemann

Jan 21, 2016, 7:09:20 AM
to ns-3-users
Yes, I noticed that he answered me. But in this case I'm not using a NixVector for routing, so there might be another mistake on my part.


Regards,
Peter

On Thursday, January 21, 2016 at 12:36:52 PM UTC+1, Konstantinos wrote:

Konstantinos

Jan 21, 2016, 7:24:45 AM
to ns-3-users, manuel....@cttc.es, bboj...@cttc.es
There are files missing:

//#include "ns3/ClusterConfig.h"
//#include "ns3/MyConfig.h"
//#include "ns3/MySingleton.h"
//#include "ns3/Scenario.h"
//#include "ns3/DataBasis.h"
//#include "ns3/Helper.h"
//#include "ns3/Network_Utilities.h"

I managed to resolve that and build the scenario.


The offending part of the code is in EpcTftClassifier::Classify (Ptr<Packet> p, EpcTft::Direction direction), when it tries to remove the UdpHeader. I guess the issue is similar to the NixVector one: when the packet gets fragmented, the UDP header is not present in the later fragments.


if (protocol == UdpL4Protocol::PROT_NUMBER)
  {
    UdpHeader udpHeader;
    pCopy->RemoveHeader (udpHeader);

For that reason I have included Manuel and Biljana (the LTE maintainers) in the loop.

Tommaso Pecorella

Jan 21, 2016, 9:11:44 AM
to ns-3-users, manuel....@cttc.es, bboj...@cttc.es
Well, if you try to get a header from a fragment... you'll have big problems.

One trick is to use a ByteTag to store the needed info (e.g., ports, protocol number, etc.), but it's so wrong that I can't explain how wrong it is. I mean, you could use it only for idealized models that couldn't care less about how the thing would be implemented for real, and that you don't plan, even remotely, to translate into something real.

The other option (but it's a pain as well) is to use the fragment identifier to retrieve the data. I.e., you see the first fragment (with the UDP header) and you classify it, but you keep the classification data and apply it blindly to any other fragment with the same fragmentation ID. There's a catch though... you could have fragments from different packets (but with the same ID) overlapping. This is a known issue in high-bandwidth networks, as the number of in-flight packets can exceed the ID space.
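
Just to give the idea, something like this (an untested sketch; ClassifyFragment, the cache and the default-bearer fallback are all invented for illustration, nothing like this exists in EpcTftClassifier):

#include <map>
#include <tuple>
#include "ns3/ipv4-header.h"

using namespace ns3;

// Key identifying all fragments of one IP datagram.
typedef std::tuple<Ipv4Address, Ipv4Address, uint16_t> FragmentKey;

// Remember the bearer chosen for the first fragment (which still carries
// the UDP header) and blindly reuse it for the tail fragments.
uint32_t
ClassifyFragment (const Ipv4Header &ipHeader,
                  uint32_t bearerIdFromFirstFragment,
                  std::map<FragmentKey, uint32_t> &cache,
                  uint32_t defaultBearerId)
{
  FragmentKey key (ipHeader.GetSource (),
                   ipHeader.GetDestination (),
                   ipHeader.GetIdentification ());
  if (ipHeader.GetFragmentOffset () == 0)
    {
      // First fragment: the normal TFT classification (done elsewhere,
      // passed in here) is valid because the UDP header is present.
      if (!ipHeader.IsLastFragment ())
        {
          cache[key] = bearerIdFromFirstFragment;
        }
      return bearerIdFromFirstFragment;
    }
  // Tail fragment: no UDP header to read, so reuse the cached decision.
  std::map<FragmentKey, uint32_t>::const_iterator it = cache.find (key);
  return (it != cache.end ()) ? it->second : defaultBearerId;
}

And remember to drop the entry when you see the last fragment, or the map will grow forever (plus, somehow, handle the ID reuse issue above).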

Anyway, you'll have to find a way not to rely on the presence of the UDP header, as it may simply not be there (and this is normal).

Cheers,

T.

Peter Riedemann

Jan 21, 2016, 10:46:07 AM
to ns-3-users, manuel....@cttc.es, bboj...@cttc.es
Thank you for your answers. I'm sorry I didn't check the example file standalone, outside of my project. I've attached the file again with the necessary modifications.

@Tommaso Pecorella: I'm not sure I understand you correctly. In my code I don't try to get a header from a fragment. I define the packet size at the application level and start the applications (server/client). At the topology level I define the nodes, the connections between them, and the routing information. So it's ns-3 that does the fragmentation, not me, once the packet size is bigger than 1472 bytes, isn't it? And that means the LTE module has to handle such fragments, too. Can you please explain the problem again in other words? I don't really understand why I can't use a packet size bigger than 1472 bytes.

Thanks for your patience.
Peter
myTestFile.cc

Manuel Requena

Jan 21, 2016, 12:34:35 PM
to ns-3-...@googlegroups.com, Manuel (cttc), bboj...@cttc.es
Hi Peter and all,

The problem is in the PGW when it tries to classify the downlink packets (in the EpcTftClassifier::Classify method) in order to tunnel the user IP packet into the appropriate EPS bearer. Currently, this method tries to decode the complete IP+UDP headers, so when it gets an incomplete IP fragment instead of a complete IP packet, it crashes because the UDP header is not present.

Currently, the TFT classifier uses all the packet filter fields (address, port and TOS). It should be more flexible and allow optional fields.

If you don't use EPS bearers (just the default bearer), the UDP header should not be decoded at all. This seems to be your case (according to the example program you sent).

If you use EPS bearers that include the port number in the packet filter and you have IP fragmentation in your network, then a smart algorithm should be implemented in the PGW (TFT classifier) to keep track of the IP fragments (such as the ones proposed by Tommaso).

Meanwhile, to avoid the IP fragmentation, you can increase the MTU of the p2p link between the PGW and your servers.
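
For example, something like this (assuming the point-to-point helper in your script is called p2ph; the exact value just needs to exceed the 1501-byte IP packets generated in your scenario):

p2ph.SetDeviceAttribute ("Mtu", UintegerValue (2000)); // > 1473 + 8 (UDP) + 20 (IP) = 1501

Note that this has to be set before p2ph.Install () creates the devices.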

Best regards,
Manuel




Tommaso Pecorella

Jan 21, 2016, 6:26:14 PM
to ns-3-users, manuel....@cttc.es, bboj...@cttc.es
Hi all,

thanks Manuel for the description, very clear.


Cheers,

T.

Peter Riedemann

Jan 25, 2016, 6:11:44 AM
to ns-3-users, manuel....@cttc.es, bboj...@cttc.es
Thank you very much for your detailed description. Now I understand it.
Okay, so I changed the MTU to a larger value, and this currently works for me. But in that case the simulation results don't account for the IP fragmentation. So I've added an EPS bearer to the LteHelper and activated it. Here is the corresponding code:

// Activate an EPS bearer with an EpcTft packet filter
Ptr<EpcTft> tft = Create<EpcTft> ();
EpcTft::PacketFilter pf;
pf.localPortStart = 8080;
pf.localPortEnd = 8080;
tft->Add (pf);

enum EpsBearer::Qci q = EpsBearer::NGBR_VIDEO_TCP_OPERATOR;
EpsBearer bearer (q);
lteHelper->ActivateDedicatedEpsBearer (homeDevices, bearer, tft);

I add this code snippet after attaching the UEs to the eNodeB, because the function lteHelper->Attach (ueDevice, enbDevice) activates the default bearer (as described in the documentation) and I don't want to use the default EPS bearer. But this doesn't work for me. Do I need to implement my own EPS bearer and the decoding of the UDP header?

Thanks,
Peter


Tommaso Pecorella

Feb 11, 2016, 5:08:45 PM
to ns-3-users, manuel....@cttc.es, bboj...@cttc.es
Hi Peter,

sorry but I forgot to reply.
Yes, I guess you'll need to write your own EPS bearer classifier. I'd get in touch with the LTE maintainer to fix the bug for good.
Note that by avoiding the fragmentation, the "normal" EPS bearer should classify your packets correctly.

Cheers,

T.