[lwip-users] Simulation of TCP connections with multiple source IP addresses


Leena Mokadam

Apr 30, 2007, 4:21:25 AM
to lwip-...@nongnu.org

Hello,

I am a new lwIP user. My query is about TCP connections with multiple
source IPs.

Can we maintain (send/receive on) multiple network interfaces using lwIP?
I have to simulate multiple IP addresses on the same PC and send/receive
TCP data with multiple IP addresses. For example, my PC needs to simulate
the following IP addresses: 192.168.20.120, 192.168.20.121, 192.168.20.122,
192.168.20.123, 192.168.20.124 and 192.168.20.125. My program needs to
maintain a TCP connection from each of these IP addresses to various
different servers, which involves sending and receiving data. Please let
me know if this is possible with the help of lwIP. I am not sure, as I
have not yet seen any lwIP API which accepts both a source IP address and
a destination IP address.

Please help me.

Thanks in advance,
Leena M.

Jonathan Larmour

May 3, 2007, 6:59:13 AM
to Mailing list for lwIP users
Leena Mokadam wrote:
> Hello,
>
> I am a LWIP's new user. My query is about TCP connection with mutiple source
> IPs.
>
> Can we maintain (send/receive) multiple network interfaces using LWIP? I
> have to simulate multiple IP addresses for the same PC and send/receive TCP
> data with multiple IP address. For example,
> My PC needs to simulate following IP addresses
> 192.168.20.120, 192.168.20.121, 192.168.20.122, 192.168.20.123,
> 192.168.20.124 and 192.168.20.125. And my program needs to maintain a TCP
> connection for all of these IP address to various different servers. It
> involves sending and receiving of data. Please let me know if this is
> possible woith the help of LWIP. I am not sure as I have yet not seen any
> LWIP API which accepts source IP address and Destination IP address.

In the current code, lwIP's routing and interface structure is
intentionally simplified. You can have only one IP address for each
interface ("netif"). I _think_ lwip should allow you to do what you want
if you set up a netif for each IP address. When you need to send something
from a specific address, then you will need to call
bind()/netconn_bind()/tcp_bind() (depending on the API you're using)
to set the local IP address to use for that connection.

If you need to set up a listening port, you will also need to have
multiple listeners - one bound to each IP address separately.
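
Just to illustrate the idea, an untested sketch of setting up one netif per
simulated address might look roughly like this (written against the 1.x-style
netif_add() signature; ethernetif_init(), the netmask and the gateway are
assumptions you'd replace with your own port's values):

#include "lwip/netif.h"
#include "lwip/tcpip.h"

/* provided by the port's ethernet driver (ethernetif.c) */
err_t ethernetif_init(struct netif *netif);

static struct netif sim_netifs[6];

void add_simulated_addresses(void)
{
  struct ip_addr ipaddr, netmask, gw;
  int i;

  IP4_ADDR(&netmask, 255, 255, 255, 0);  /* assumption: /24 network */
  IP4_ADDR(&gw, 192, 168, 20, 1);        /* assumption: gateway address */

  for (i = 0; i < 6; i++) {
    /* 192.168.20.120 .. 192.168.20.125, one netif each */
    IP4_ADDR(&ipaddr, 192, 168, 20, 120 + i);
    netif_add(&sim_netifs[i], &ipaddr, &netmask, &gw,
              NULL, ethernetif_init, tcpip_input);
    netif_set_up(&sim_netifs[i]);
  }
  netif_set_default(&sim_netifs[0]);
}

Each connection can then be bound to whichever of those addresses it should
use, as described above.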

Jifl
--
eCosCentric Limited http://www.eCosCentric.com/ The eCos experts
Barnwell House, Barnwell Drive, Cambridge, UK. Tel: +44 1223 245571
Registered in England and Wales: Reg No 4422071.
------["The best things in life aren't things."]------ Opinions==mine

le...@spartanlabs.com

May 3, 2007, 7:03:54 AM
to Mailing list for lwIP users
Hi Jonathan,

Thanks for the response. Can I have a look at some sample code which does
something similar to what I am trying to do?

Thanks and Regards,
Leena M.

Jonathan Larmour

May 3, 2007, 8:39:08 AM
to Mailing list for lwIP users
le...@spartanlabs.com wrote:
> Hi Jonathan,
>
> Thanks for the response. Can I have a look at some sample code which does
> similar to what I am trying to do?

Not that I know of. Although someone else might, in which case they would
probably need to know which API you are using.

Goldschmidt Simon

May 3, 2007, 9:01:17 AM
to Mailing list for lwIP users
Hi,

> In the current code, lwIP's routing and interface structure
> is intentionally simplified. You can have only one IP address
> for each interface ("netif"). I _think_ lwip should allow you
> to do what you want if you set up a netif for each IP
> address. When you need to send something from a specific

That's what I thought, too. At least for the TX side this should work,
provided you have an ethernetif.c that allows multiple instances (multiple
calls to ethernetif_init() with multiple 'struct netif's) but sends
everything on one hardware interface.

On the RX side I'm not so sure. Maybe you have to call netif->input() with
the right struct netif for the local IP address, or maybe it works with any
netif; I really don't know. You would have to check that...
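
To make the open question concrete, the shared RX path I have in mind would
be something like this untested sketch, where rx_netif is a hypothetical
pointer to one (arbitrarily chosen) of the netifs registered for the
hardware - whether the stack then also accepts packets addressed to the
other netifs' IP addresses is exactly the part that needs checking:

#include "lwip/netif.h"
#include "lwip/pbuf.h"
#include "lwip/err.h"

/* one of the netifs added for this piece of hardware (set elsewhere) */
static struct netif *rx_netif;

/* called by the driver for every received frame */
static void shared_ethernetif_input(struct pbuf *p)
{
  if (rx_netif->input(p, rx_netif) != ERR_OK) {
    pbuf_free(p);   /* the stack did not take ownership of the pbuf */
  }
}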

As for an example, for a socket API client connection, I guess you would do
it somehow like this:

>>>>>>>>>>CODE>>>>>>>>>>
struct sockaddr_in local_addr;
struct sockaddr_in server_addr;
int sock, ret;

sock = lwip_socket(AF_INET, SOCK_STREAM, 0);

/* bind to the local IP address to use as source; port 0 lets the
   stack pick an ephemeral port */
memset(&local_addr, 0, sizeof(struct sockaddr_in));
local_addr.sin_family = AF_INET;
local_addr.sin_port = 0; /* note I don't really know if this works! */
ret = inet_aton("192.168.20.120", &local_addr.sin_addr);
lwip_bind(sock, (struct sockaddr *)&local_addr, sizeof(local_addr));

memset(&server_addr, 0, sizeof(struct sockaddr_in));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(SERVER_PORT);
ret = inet_aton("server ip address", &server_addr.sin_addr);
lwip_connect(sock, (struct sockaddr *)&server_addr, sizeof(server_addr));
<<<<<<<<<<CODE<<<<<<<<<<

For a socket API server connection, it would look somehow like this:

>>>>>>>>>>CODE>>>>>>>>>>
struct sockaddr_in local_addr;
struct sockaddr_in client_addr;
socklen_t client_addr_len = sizeof(client_addr);
int sock_listen, sock_conn, ret;

sock_listen = lwip_socket(AF_INET, SOCK_STREAM, 0);

/* bind the listener to one specific local IP address and port */
memset(&local_addr, 0, sizeof(struct sockaddr_in));
local_addr.sin_family = AF_INET;
local_addr.sin_port = htons(SERVER_PORT);
ret = inet_aton("192.168.20.120", &local_addr.sin_addr);
lwip_bind(sock_listen, (struct sockaddr *)&local_addr, sizeof(local_addr));

lwip_listen(sock_listen, 1);

sock_conn = lwip_accept(sock_listen, (struct sockaddr *)&client_addr,
                        &client_addr_len);
<<<<<<<<<<CODE<<<<<<<<<<


Note that I have put this down from memory and not tested it. Also, I have
left out all error checking.

Hope this helps, and tell us whether it works or not, as I find this pretty
interesting info about the stack ;-)

Simon

jcr...@xplornet.com

May 8, 2007, 9:41:46 AM
to Mailing list for lwIP users

Some months ago I started a thread when I ran into problems with response
times using Microblaze and lwIP in sockets mode. I implemented what others
had suggested - a hardware multiplier, implementing a cache, etc. - with
improved performance, but the underlying problem remained.

The application used lwip to transfer a small amount of data (requests really)
from the PC to the host which sends back 2Kbytes of data, divided into one full
1460 byte segment followed by the remaining 588 bytes in a partial segment. To
make the system user-responsive, I wanted to complete the request/response time
in, say, 100 millisecs. The problem arose when the PC client took some 200 -
300 milliseconds to ACK the first segment, as seen on Windump traces.

In tcp_out.c, I modified the tcp_enqueue() function to mark any full segment
as PSH, and in tcp_output() I modified the conditions on entry to the
'while' block to ensure that both segments were transmitted consecutively.

The complete transaction now takes place within 14 millisecs, which includes
some short delays I added to the client code, e.g. between the read and
close calls.

I am not sure why this modification was necessary - are there any settings
in either lwipopts or Winsock that I could have changed to avoid this? I
think the problem may lie more at the PC end: in the original implementation
the first segment was sent without PSH, the client then perhaps waited for
some timeout period before realising nothing more was coming and sent the
ACK, at which time the host responded by sending the second, partial segment
that was marked PSH.

Any thoughts or suggestions would be most welcome.

John Robbins.

Kieran Mansley

May 8, 2007, 10:06:41 AM
to Mailing list for lwIP users
On Tue, 2007-05-08 at 10:41 -0300, jcr...@xplornet.com wrote:
> I am not sure why this modification was necessary - are there any settings in
> either lwipopts or Winsock that I could have changed to avoid this. I think the
> problem may lie more in the PC end - because in the original implementation the
> first segment was sent without PSH and the client then maybe waited for some
> timeout period before realising nothing more was coming, then sent the ACK at
> which time the host responded sending the second partial segment that was
> marked PSH.

Can you explain what modification you made to the while loop in
tcp_output()? I'm guessing you allowed it to send the second segment
even if there was insufficient window space for it? It is this change
that is most likely getting the results you need.

You are correct that it is the Windows end that is causing your problem.
Most TCP stacks have a "delayed ACK" policy, i.e. in certain conditions
they won't send an ACK in the hope that the connection involves
bi-directional traffic and the ACK will be able to piggyback on a returning
data packet. In your case there is no returning data packet, and so the
ACK isn't sent until a timer goes off after about 200ms. However, the
stack should acknowledge at least every other packet. This is why, if
you modify lwIP to send both packets, they get ACKed immediately: the
Windows end has received two packets and so must send an ACK straight
away.

Unfortunately, this is just a property of TCP, rather than either lwIP
or Windows having a bug. I think Linux has some clever stuff in it to
notice when the delayed ACK policy would harm performance and so is
able to turn it off in cases like this.

To find out more, have a look through the TCP RFCs for "delayed ACK".
These are relatively well documented problems!

Kieran

Kieran Mansley

May 8, 2007, 2:50:05 PM
to Mailing list for lwIP users
On Tue, 2007-05-08 at 15:06 +0100, Kieran Mansley wrote:

> Unfortunately, this is just a property of TCP, rather than either lwIP
> or windows having a bug. I think linux has some clever stuff in it to
> notice when the delayed ACK protocol would harm performance and so is
> able to turn it off in cases like this.

A colleague has pointed out that you may be able to avoid this problem,
if you're using the sockets API, by setting the TCP_NODELAY socket
option. This won't solve the issue in all cases, but may be good enough
for your needs.
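
With lwIP's sockets API that would be something like the untested sketch
below (it assumes an already-created TCP socket descriptor):

#include "lwip/sockets.h"

/* untested sketch: disable the Nagle algorithm on an lwIP TCP socket so
   small trailing segments are not held back behind unacknowledged data */
static int disable_nagle(int sock)
{
  int flag = 1;
  return lwip_setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                         &flag, sizeof(flag));
}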

Caglar Akyuz

May 8, 2007, 3:17:15 PM
to Mailing list for lwIP users
Kieran Mansley wrote:
> On Tue, 2007-05-08 at 15:06 +0100, Kieran Mansley wrote:
>
>> Unfortunately, this is just a property of TCP, rather than either lwIP
>> or windows having a bug. I think linux has some clever stuff in it to
>> notice when the delayed ACK protocol would harm performance and so is
>> able to turn it off in cases like this.
>
> A colleague has pointed out that you may be able to avoid this problem,
> if you're using the sockets API, by setting the TCP_NODELAY socket
> option. This won't solve the issue in all cases, but may be good enough
> for your needs.

I have run into a similar problem and solved it on the lwIP side by forcing
packets to be sent immediately using tcp_output(). On the PC side (both
Windows and Linux hosts) I tried setting TCP_NODELAY as you pointed out,
i.e. I disabled Nagle's algorithm. After a few months of testing, I realized
that I was facing a lost-packets issue on the PC side. When I removed the
part that disabled Nagle's algorithm, the packet loss rate recovered a
little. I'm not sure if it's true or not, but I guess the Windows and Linux
stacks can show this kind of behavior.

Somebody please correct me if I'm wrong, as this problem really disturbs me.

Kieran Mansley

May 8, 2007, 3:20:01 PM
to Mailing list for lwIP users
On Tue, 2007-05-08 at 22:17 +0300, Caglar Akyuz wrote:
> I realized that I'm facing some lost packets issue on the PC side.
> When I removed disabling Naggle's algorithm part, I was able to manage
> recovering packet loss rate a little bit. I'm not sure if it's true or
> not, but I guess Windows and Linux stacks can show this kind of behavior.

Hmm, those two issues should really not be connected!

Kieran

Caglar Akyuz

May 8, 2007, 3:30:07 PM
to Mailing list for lwIP users
Kieran Mansley wrote:
> On Tue, 2007-05-08 at 22:17 +0300, Caglar Akyuz wrote:
>> I realized that I'm facing some lost packets issue on the PC side.
>> When I removed disabling Naggle's algorithm part, I was able to manage
>> recovering packet loss rate a little bit. I'm not sure if it's true or
>> not, but I guess Windows and Linux stacks can show this kind of behavior.
>
> Hmm, those two issues should really not be connected!

They should be, and from your wording I understand that in fact they are.

Thanks for the reply.

Yusuf Caglar Akyuz

jcr...@xplornet.com

May 8, 2007, 4:24:40 PM
to Mailing list for lwIP users
Hi Kieran,

Thanks for the fast response. The changes in tcp_out.c are:

A)
// in tcp_enqueue(), around line 260, after
TCPH_FLAGS_SET(seg->tcphdr, flags);
// the following was added:
if (seglen == pcb->mss)
  TCPH_SET_FLAG(seg->tcphdr, TCP_PSH);

B)
In tcp_output(), in the while loop that calls tcp_output_segment(seg, pcb),
a new condition was added (line 456 approx.) to make sure that the second
(partial) segment was sent consecutively, thus:

while (seg != NULL &&
       ((ntohl(seg->tcphdr->seqno) - pcb->lastack + seg->len <= wnd)
        || (TCPH_FLAGS(seg->tcphdr) & TCP_PSH)))

I think the window space on the PC was some 64K. I had considered padding out
the second segment to 1460 bytes to force an ACK but decided that was really
giving up the battle too easily.

This problem was for a while obscured by another problem, in that the PC was
sending the FIN or FIN/ACK ACK too quickly for the Microblaze (server) end,
which would remain in the FIN_WAIT_2 state for several seconds with various
RSTs being exchanged until the segment was finally destroyed. I could not
get the logic worked out exactly, as the server always reached FIN_WAIT_2
but acted almost as if it were still in FIN_WAIT_1. A very short delay was
enough to cure this problem. Timing throughout was observed using an ANT8
logic analyser fed by a Microblaze FSL link, with a time resolution of some
30 nanoseconds.

John.

Leena Mokadam

May 10, 2007, 4:37:05 AM
to Mailing list for lwIP users
Hi Simon,

Thanks for the response. I tried to write a simple client to send data to
the server. My application ended with a segmentation fault when it tried to
create a socket with the lwip_socket() API. Could you please help in finding
the problem?

Thanks and Regards,
Leena M.

Goldschmidt Simon

May 10, 2007, 4:52:48 AM
to Mailing list for lwIP users

> Thanks fro the response. I tried to write a simple client to
> send the data to server. My application ended with a
> segmentation fault when tried to create socket with
> lwip_socket() API. Could you please help in finding the problem?

Hmm, which OS & lwIP port are you running? What kind of application did you
run?

Maybe you can test some of the contrib apps first (if you are running it on
Linux)? You said you have to simulate something on your PC; if you run it on
Windows, I don't think there are socket example applications for Windows in
the contrib module. "contrib/ports/unix/proj/unixsim/simhost.c" includes a
ping test which uses raw sockets, which you could try (if running Linux).

Kieran Mansley

May 10, 2007, 12:16:25 PM
to Mailing list for lwIP users
On Thu, 2007-05-10 at 14:07 +0530, Leena Mokadam wrote:
> Hi Simon,
>
> Thanks fro the response. I tried to write a simple client to send the data
> to server. My application ended with a segmentation fault when tried to
> create socket with lwip_socket() API. Could you please help in finding the
> problem?

Is this a simple client using just one address, or are you already
trying to do multiple addresses? I'd start simple with just one network
address configured, and get that working, then work your way up.

Kieran

jcr...@xplornet.com

May 21, 2007, 9:40:00 AM
to Mailing list for lwIP users
Hi Kieran,

I looked into the use of TCP_NODELAY. If this option is set, the result is to
add TF_NODELAY to tcp->flags.

In api_msg.c in do_write() this flag is used as shown in the code fragment
below.

case NETCONN_TCP:
  err = tcp_write(msg->conn->pcb.tcp, msg->msg.w.dataptr,
                  msg->msg.w.len, msg->msg.w.copy);
  /* This is the Nagle algorithm: inhibit the sending of new TCP
     segments when new outgoing data arrives from the user if any
     previously transmitted data on the connection remains
     unacknowledged. */
  if (err == ERR_OK &&
      (msg->conn->pcb.tcp->unacked == NULL ||
       (msg->conn->pcb.tcp->flags & TF_NODELAY))) {
    tcp_output(msg->conn->pcb.tcp);

In my case tcp->unacked is always NULL (since I am only sending the one set
of data), so the other condition is redundant. The function tcp_output()
would normally send out only the one segment (the first segment, which is
full), since pcb->lastack will not get updated until the client responds and
updates this field:

while (seg != NULL &&
       (ntohl(seg->tcphdr->seqno) - pcb->lastack + seg->len <= wnd))

In my version the second segment is sent without delay since the flags are now
set to include TCP_PSH.

while (seg != NULL &&
       ((ntohl(seg->tcphdr->seqno) - pcb->lastack + seg->len <= wnd)
        || (TCPH_FLAGS(seg->tcphdr) & TCP_PSH)))

I am not completely sure of my interpretation of the sequence of events and
would welcome any corrections.

John Robbins.

Kieran Mansley

May 21, 2007, 10:09:32 AM
to Mailing list for lwIP users
On Mon, 2007-05-21 at 10:40 -0300, jcr...@xplornet.com wrote:
> Hi Kieran,
>
> I looked into the use of TCP_NODELAY. If this option is set, the result is to
> add TF_NODELAY to tcp->flags.
>
> In api_msg.c in do_write() this flag is used as shown in the code fragment
> below.
>
> case NETCONN_TCP:
> err = tcp_write(msg->conn->pcb.tcp, msg->msg.w.dataptr,
> msg->msg.w.len, msg->msg.w.copy);
> /* This is the Nagle algorithm: inhibit the sending of new TCP
> segments when new outgoing data arrives from the user if any
> previously transmitted data on the connection remains
> unacknowledged. */
> if(err == ERR_OK && (msg->conn->pcb.tcp->unacked == NULL || (msg->conn-
> >pcb.tcp->flags & TF_NODELAY)) ) {
> tcp_output(msg->conn->pcb.tcp);
>
> In my case tcp->unacked is always NULL (since I am only sending the one set of
> data) so the other condition is redundant.

Yes, I agree. It was a colleague who suggested it and I was sceptical
that it was Nagle that was causing your particular problem, but I
thought it worth passing on.

> The function tcp_output() would
> normally only send out only the one segment (the first segment which is full)
> since the pcb->lastack will not get updated until the client responds and
> updates this field
>
> while (seg != NULL &&
> ( (ntohl(seg->tcphdr->seqno) - pcb->lastack + seg->len <= wnd) )

That's not quite true. It will send as many segments as it can without
exceeding "wnd" bytes past pcb->lastack. This could be many segments, not
just one.

> In my version the second segment is sent without delay since the flags are now
> set to include TCP_PSH.
>
> while (seg != NULL &&
> ( (ntohl(seg->tcphdr->seqno) - pcb->lastack + seg->len <= wnd)
> || (TCPH_FLAGS(seg->tcphdr) & TCP_PSH) ))

The problem here is that if the PSH bit is set you're sending segments
even if you exceed wnd bytes past lastack. This violates TCP's spec:
you're only allowed to send a certain number of bytes to a receiver
(that's what the window is for) until the receiver says "OK, send me
some more". At the start of a connection the wnd value might be small,
but it should quickly increase so you can send many packets in one go
rather than just one at a time. By ignoring this, you're bypassing the
mechanism that TCP uses to try and avoid packet loss, and so while it
sounds like it's helpful to you in your particular circumstance, it's
not a safe change to make.

Hope that helps you understand what's what.

jcr...@xplornet.com

May 21, 2007, 11:19:43 AM
to Mailing list for lwIP users
Thanks for the fast response.

I will bear your comments in mind. In the current case, where I have only
the two segments (one full and one partial) in a given transaction, I don't
think I can exceed the client window size. I could add a further condition
that the proposed transmission of x segments will not exceed the window
limit. I think I could do this in tcp_write().

The problem, it seems to me, really lies in the overhead that HTTP adds to
ensure safe communications. It is not really suited to my application - UDP
would be better if the instrument and computer were always on the same
network, but this of course is rarely the case.

I looked briefly into RFC 1644, T/TCP, which seemed to attack some of the
issues in this very asymmetrical type of communication. There did not seem
to be much in the way of support, and a lot of messages about potential
security(?) risks, which I don't think would be an issue in our case.

John.
