
tcpclient in .net 2.0


Keith Langer

Jul 6, 2007, 2:55:08 PM
Maybe someone has come across this situation and has a way to handle
it.

If I have a TcpClient object and do a GetStream.Read on it after
setting a ReceiveTimeout, I'm getting different behavior in .NET 2.0
than in 1.0 when the method returns 0 bytes (which signifies a
timeout). In both cases I get a System.IO.IOException, but in the
2.0 framework the socket also disconnects when this occurs (in 1.0
it did not). They have made some improvements to the TcpClient
class, such as exposing the underlying Socket, so I'd prefer to
keep using TcpClient. Any idea how I can prevent this disconnect?
Right now I'm checking DataAvailable and using Thread.Sleep as a
workaround, but I'd prefer to do a blocking read call.
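
The DataAvailable-plus-Thread.Sleep workaround described above is
essentially a polling loop. A rough analog at the plain BSD-socket
level can be sketched in Python (an assumption for illustration, since
the thread shows no actual C#; the helper name polling_read is made up
here):

```python
import select
import socket
import time

# A connected pair of stream sockets stands in for the TcpClient link.
server, client = socket.socketpair()
server.sendall(b"ping")

def polling_read(sock, timeout, interval=0.05):
    """Poll for readable data instead of issuing a blocking read.

    Roughly analogous to looping on DataAvailable with Thread.Sleep:
    a zero-timeout select() checks for pending data without blocking,
    then the loop sleeps and retries until the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        readable, _, _ = select.select([sock], [], [], 0)  # non-blocking check
        if readable:
            return sock.recv(4096)
        time.sleep(interval)
    return None  # overall timeout expired; the socket itself is untouched

assert polling_read(client, 1.0) == b"ping"

server.close()
client.close()
```

The trade-off is the one implied in the question: the loop never blocks
in a receive call, but it burns wakeups and adds up to one polling
interval of latency to every read.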


Thanks,
Keith

Peter Duniho

Jul 6, 2007, 3:57:30 PM
On Fri, 06 Jul 2007 11:55:08 -0700, Keith Langer <tana...@aol.com> wrote:

> [...] In both cases I get a System.IO.IOException, but in the
> 2.0 framework the socket also disconnects when this occurs (in 1.0
> it did not). They have made some improvements to the TcpClient
> class, such as exposing the underlying Socket, so I'd prefer to
> keep using TcpClient. Any idea how I can prevent this disconnect?

I don't know how they implement the timeout, but assuming they use
the underlying socket timeout, there's a good reason for
disconnecting the socket after a timeout: the socket is in an
indeterminate state at that point.

So it's likely that this change in behavior is actually a usability
bug-fix, so that inexperienced socket programmers don't try to
continue using a socket on which a timeout has occurred.

Note that this receive timeout is different from using a timeout in
other methods (e.g. Socket.Select()). You can implement a timeout
in a variety of other ways that don't invalidate the socket. But
the ReceiveTimeout property likely sets the socket's receive
timeout option directly (setsockopt(..., SO_RCVTIMEO, ...)), and
that's what will invalidate the socket.

Pete
