I've written a server application using the IO completion port model in C++
and call
GetQueuedCompletionStatus() to determine when and which overlapped IO
requests complete.
I assume that if a WSASend() or WSARecv() call succeeds (i.e. returns 0, or
SOCKET_ERROR with WSA_IO_PENDING status) I will *always* see a corresponding
event from GetQueuedCompletionStatus() with a pointer to the LPOVERLAPPED
structure passed to WSASend/Recv().
Is this correct?
If I close the socket while overlapped I/O is pending on that socket, will
I still get event notification for each IO with pointers to my LPOVERLAPPED
structures?
The reason I ask is that I think I am missing events when a socket is closed
prematurely.
Since I may have several overlapped read/write operations pending at one
time, I dynamically allocate my OVERLAPPED objects before calling
WSASend/WSARecv and delete them when events are picked up by
GetQueuedCompletionStatus().
Unless events are guaranteed for a successful WSASend/WSARecv, how can I
know when to delete my OVERLAPPED objects?
The few examples I've seen using IO completion ports are too trivial to be
of much help.
Thanks for any help.
-Randy
> I've written a server application using the IO completion port model
> in C++ and call
> GetQueuedCompletionStatus() to determine when and which overlapped IO
> requests complete.
>
> I assume that if a WSASend() or WSARecv() call succeeds (i.e. returns
> 0, or SOCKET_ERROR with WSA_IO_PENDING status) I will *always* see a
> corresponding event from GetQueuedCompletionStatus() with a pointer
> to the LPOVERLAPPED structure passed to WSASend/Recv().
>
> Is this correct?
Yes.
> If I close the socket while overlapped I/O is pending on that
> socket, will I still get event notification for each IO with pointers
> to my LPOVERLAPPED structures?
You must not release a shared resource while another thread is or might
be using it. DO NOT DO THIS. Use 'shutdown' for TCP sockets.
> The reason I ask is that I think I am missing events when a socket is
> closed prematurely.
There are no events on a closed socket. When you close a socket, the
socket handle no longer exists so it's impossible for your application to
receive an event for it.
> Since I may have several overlapped read/write operations pending at
> one time, I dynamically allocate my OVERLAPPED objects before calling
> WSASend/WSARecv and delete them when events are picked up by
> GetQueuedCompletionStatus().
>
> Unless events are guaranteed for a successful WSASend/WSARecv, how
> can I know when to delete my OVERLAPPED objects?
How do you do this? How do you prevent, for example, the call to
'closesocket' from occurring right before a call to 'WSASend' (in which case
you could send on the wrong connection)?
Keep count of all pending operations. Do not close a socket handle until
all the operations are safely completed or aborted. Use 'shutdown' for TCP
sockets. Do not close an active socket.
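The bookkeeping can be sketched like this. This is a minimal, platform-neutral illustration, not the real Winsock calls: the `Connection`, `pending`, and `closing` names are hypothetical, and production code on Windows would use the Interlocked* family on the per-connection struct.

```cpp
#include <atomic>
#include <cassert>

// Hypothetical per-connection bookkeeping: count every overlapped
// operation issued, and only tear the connection down once the count
// has drained back to zero AND a close was requested.
struct Connection {
    std::atomic<int>  pending{0};   // outstanding overlapped ops
    std::atomic<bool> closing{false};

    // Call before issuing WSASend/WSARecv.
    void op_issued() { pending.fetch_add(1); }

    // Call when GetQueuedCompletionStatus dequeues a completion.
    // Returns true when this was the last outstanding operation and a
    // close was requested, i.e. it is now safe to closesocket/delete.
    bool op_completed() {
        int left = pending.fetch_sub(1) - 1;
        return left == 0 && closing.load();
    }

    void request_close() { closing.store(true); }
};
```

The point is that 'closesocket' only ever happens on the path where the count hits zero, so it can never race an in-flight WSASend.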
DS
>RJGraham wrote:
...
>> Unless events are guaranteed for a successful WSASend/WSARecv, how
>> can I know when to delete my OVERLAPPED objects?
>
> How do you do this? How do you prevent, for example, the call to
> 'closesocket' from occurring right before a call to 'WSASend' (in which case
> you could send on the wrong connection)?
>
> Keep count of all pending operations. Do not close a socket handle until
>all the operations are safely completed or aborted. Use 'shutdown' for TCP
>sockets. Do not close an active socket.
<2 cents>
RJ,
Using overlapped I/O and coding the logic to do a proper shutdown just
took me many tries before I got it right.
</2 cents>
With completion ports, the pending operations come back with a
WSA_OPERATION_ABORTED for a CancelIo() or DisconnectEx(). I don't know if
this helps you.
--
David Gravereaux <davy...@pobox.com>
[species: human; planet: earth,milkyway(western spiral arm),alpha sector]
Yes.
> If I close the socket while overlapped I/O is pending on that socket,
> will I still get event notification for each IO with pointers to my
> LPOVERLAPPED structures?
Yes. You should set a flag indicating that closesocket was called:
CPerSocket::Close( ... )
{
    // Mutex lock
    if ( m_hSck != INVALID_SOCKET )
    {
        closesocket( m_hSck );
        m_hSck = INVALID_SOCKET;
    }
    // Mutex unlock
}

CPerSocket::UseSocket( ... )
{
    // Mutex lock
    if ( m_hSck != INVALID_SOCKET )
    {
        // Use
    }
    // Mutex unlock
}
This will absolutely ensure that once a socket is closed, no other thread
with active references to the closed socket will touch it.
However, this is not graceful at all. Here is an outline of a graceful
shutdown algorithm:
http://groups.google.com/groups?selm=rr_Ca.1115481%24S_4.1149439%40rwcrnsc53&rnum=2
> The reason I ask is that I think I am missing events when a socket is
> closed prematurely.
You will not miss any. If you do, you're doing something special....
;)
> Since I may have several overlapped read/write operations pending at one
> time, I dynamically allocate my OVERLAPPED objects before calling
> WSASend/WSARecv and delete them when events are picked up by
> GetQueuedCompletionStatus().
Frequent allocations will kill performance! Keep a cache of overlapped
pointers in the per-socket struct, and pop and push as needed.
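A minimal sketch of such a cache, with hypothetical names (`OpContext`, `OpCache`), follows. A production version would need a lock or a lock-free stack around the free list, since several GQCS worker threads recycle objects concurrently:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical per-operation context; in real code this would embed
// a WSAOVERLAPPED as a member plus the WSABUF and buffer state.
struct OpContext {
    char buffer[4096];
};

// A trivial free list: pop a recycled object if one exists, otherwise
// allocate; push returns it to the cache instead of deleting it.
class OpCache {
    std::vector<OpContext*> free_;
public:
    ~OpCache() { for (OpContext* p : free_) delete p; }

    OpContext* pop() {
        if (free_.empty()) return new OpContext();
        OpContext* p = free_.back();
        free_.pop_back();
        return p;
    }

    void push(OpContext* p) { free_.push_back(p); }

    std::size_t cached() const { return free_.size(); }
};
```

This replaces a new/delete pair per I/O with a vector push/pop in the steady state.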
> Unless events are guaranteed for a successful WSASend/WSARecv, how can I
> know when to delete my OVERLAPPED objects?
Once GQCS dequeues, you know that the system is done with the attached
overlapped struct, and you can free it.
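Since GQCS hands back only the LPOVERLAPPED pointer, the usual trick (used by the BufferInfo/CONTAINING_RECORD code later in this thread) is to embed the OVERLAPPED inside a larger per-operation struct and step back to the containing object. A portable sketch of that pattern, with a stand-in struct in place of the real Win32 OVERLAPPED:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for the Win32 OVERLAPPED struct, for illustration only.
struct FakeOverlapped { void* internal[4]; };

// Per-operation context with the overlapped embedded in it.
struct PerOp {
    int            opcode;
    FakeOverlapped ol;
    char           payload[64];
};

// Same idea as the CONTAINING_RECORD macro: step back from the
// embedded member's address to the start of the enclosing struct.
PerOp* from_overlapped(FakeOverlapped* lpo) {
    return reinterpret_cast<PerOp*>(
        reinterpret_cast<char*>(lpo) - offsetof(PerOp, ol));
}
```

So the pointer GQCS returns identifies the exact operation, and freeing (or recycling) the containing object at that point is safe.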
P.S.
All of this can be easily solved by simply reference counting your socket.

// Producer
InterlockedIncrementAcquire( &Socket.Refs );
Make overlapped call

// Consumer
GQCS dequeues
InterlockedDecrementRelease( &Socket.Refs );

Now just keep watch for the drop-to-zero condition, and you're free to free...
:)
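In C++11 atomics the conventional ordering for such a count is: relaxed on the increment, release on the decrement, and an acquire before teardown by whichever thread sees the count hit zero (the same discipline shared_ptr uses). A sketch, with hypothetical `Socket` fields:

```cpp
#include <atomic>
#include <cassert>

// Reference-counted socket teardown sketch.  The increment can be
// relaxed, the decrement is a release, and the thread that drops the
// count to zero performs an acquire fence before freeing, so all work
// completed by other threads is visible to it.
struct Socket {
    std::atomic<int> refs{0};
    bool freed = false;
};

void producer_issue(Socket& s) {
    s.refs.fetch_add(1, std::memory_order_relaxed);
    // ... make the overlapped call here ...
}

// Called by the consumer after GQCS dequeues a completion.
// Returns true if this caller must free the socket.
bool consumer_complete(Socket& s) {
    if (s.refs.fetch_sub(1, std::memory_order_release) == 1) {
        std::atomic_thread_fence(std::memory_order_acquire);
        s.freed = true;   // stand-in for the real teardown
        return true;
    }
    return false;
}
```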
shutdown does not cause pending WSARecv's to complete.
However, the client would see the shutdown and then call closesocket,
causing the WSARecvs on the server to complete...
For a TCP socket, a shutdown(SD_BOTH) will not cause pending WSARecvs to
complete even though a non-blocking receive would return zero?
DS
DOOHHH!!!!!!!!!!!
Use Acquire for consumer, and Release for producer!
Sorry!
:O
> DOOHHH!!!!!!!!!!!
I think the proper method is to use WSASendDisconnect (or DisconnectEx for
CPs) when you're the initiator, then wait for the handshake back from a
WSARecv that returns zero bytes. I think shutdown() is similar, but I
don't recall all the details.
My stuff looks like this:
static int
IocpCloseProc (
    ClientData instanceData,    /* The socket to close. */
    Tcl_Interp *interp)         /* Unused. */
{
    SocketInfo *infoPtr = (SocketInfo *) instanceData;
    int errorCode = 0;
    BufferInfo *bufPtr;

    /*
     * The core wants to close channels after the exit handler!
     * Our heap is gone!
     */
    if (initialized) {
        /* Artificially increment the count. */
        InterlockedIncrement(&infoPtr->outstandingOps);
        /* Flip the bit so no new stuff can ever come in again. */
        InterlockedExchange(&infoPtr->markedReady, 1);
        /* Setting this means all returning operations will get
         * trashed and no new operations are allowed. */
        infoPtr->flags |= IOCP_CLOSING;
        /* Tcl now doesn't recognize us anymore, so don't let this
         * dangle. */
        infoPtr->channel = NULL;
        /* Remove ourselves from the readySockets list. */
        IocpLLPop(&infoPtr->node, IOCP_LL_NODESTROY);
        /* Remove all events queued in the event loop for this socket. */
        Tcl_DeleteEvents(IocpRemovePendingEvents, infoPtr);

        if (!infoPtr->acceptProc) {
            /* Queue this client socket up for auto-destroy. */
            bufPtr = GetBufferObj(infoPtr, 0);
            PostOverlappedDisconnect(infoPtr, bufPtr);
        } else {
            SOCKET temp;
            /* Close this listening socket directly. */
            infoPtr->flags |= IOCP_CLOSABLE;
            InterlockedDecrement(&infoPtr->outstandingOps);
            /* collect stats */
            InterlockedDecrement(&StatOpenSockets);
            temp = infoPtr->socket;
            infoPtr->socket = INVALID_SOCKET;
            winSock.closesocket(temp);
        }
    }
    return errorCode;
}
static DWORD
PostOverlappedDisconnect (SocketInfo *infoPtr, BufferInfo *bufPtr)
{
    BOOL rc;
    DWORD WSAerr;

    /* Increment the outstanding overlapped count for this socket. */
    InterlockedIncrement(&infoPtr->outstandingOps);
    bufPtr->operation = OP_DISCONNECT;

    rc = infoPtr->proto->DisconnectEx(infoPtr->socket, &bufPtr->ol,
            0 /*TF_REUSE_SOCKET*/, 0);
    if (rc == FALSE) {
        if ((WSAerr = winSock.WSAGetLastError()) != WSA_IO_PENDING) {
            bufPtr->WSAerr = WSAerr;
            /*
             * Even though we know about the error now, post this to
             * the port manually, anyway.
             */
            PostQueuedCompletionStatus(IocpSubSystem.port, 0,
                    (ULONG_PTR) infoPtr, &bufPtr->ol);
            return NO_ERROR;
        }
    } else {
        /* The DisconnectEx completed now and is queued to the port. */
        __asm nop;
    }
    return NO_ERROR;
}
Use this for NT4:
BOOL PASCAL
OurDisconnectEx (
    SOCKET hSocket,
    LPOVERLAPPED lpOverlapped,
    DWORD dwFlags,
    DWORD reserved)
{
    BufferInfo *bufPtr;

    bufPtr = CONTAINING_RECORD(lpOverlapped, BufferInfo, ol);
    winSock.WSASendDisconnect(hSocket, NULL);
    PostQueuedCompletionStatus(IocpSubSystem.port, 0,
            (ULONG_PTR) bufPtr->parent, lpOverlapped);
    winSock.WSASetLastError(WSA_IO_PENDING);
    return FALSE;
}
The worker thread that handles the GQCS does this:
static void
HandleIo (
    register SocketInfo *infoPtr,
    register BufferInfo *bufPtr,
    HANDLE CompPort,
    DWORD bytes,
    DWORD WSAerr,
    DWORD flags)
{
    if (WSAerr == WSA_OPERATION_ABORTED) {
        /* Reclaim cancelled overlapped buffer objects. */
        FreeBufferObj(bufPtr);
        goto done;
    }

    switch (bufPtr->operation) {
    ....
    case OP_READ:
        ....
        if (bytes > 0) {
            ....
        } else if (infoPtr->flags & IOCP_CLOSING) {
            infoPtr->flags |= IOCP_CLOSABLE;
            FreeBufferObj(bufPtr);
            break;
        } else if ...

    case OP_DISCONNECT:
        /* remove the extra ref count. */
        InterlockedDecrement(&infoPtr->outstandingOps);
        infoPtr->flags |= IOCP_CLOSABLE;
        FreeBufferObj(bufPtr);
        break;
    }

done:
    if (InterlockedDecrement(&infoPtr->outstandingOps) <= 0
            && (infoPtr->flags & IOCP_CLOSABLE)) {
        /* This is the last operation. */
        FreeSocketInfo(infoPtr);
    }
}
Notice I decrement the extra ref count in the OP_DISCONNECT case to make
sure it ends up being the last. Notice the use of the IOCP_CLOSING and
IOCP_CLOSABLE flags. Also notice in IocpCloseProc, I don't use a
WaitForSingleObject call to block for all refs to come back. I found that
a WaitForSingleObject was *EXTREMELY* unforgiving for performance, and it
was better to just "let it go" to do an auto-destroy. I found it best to
delete the per-socket struct in the completion thread itself to avoid any
threading issues where both the worker and the main collide during a
closure (it was happening to me for a time before I realized how stupid I
was).
Yeah, SD_BOTH works.
I was remembering a situation on NT in which the server calls
shutdown(SD_SEND), and the client fails to respond to the zero-byte
completions it gets and just sits there without doing anything. The
pending operations would still exist on the server...
I think I now realize that I need to call shutdown(), and let the send/recv
events drain before calling close().
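That sequence (shutdown the send side, let pending receives drain, only then close) can be modeled as a tiny state machine. This is an illustrative sketch only: the states and names are made up, not Winsock API, and the real transitions would be driven by GQCS completions.

```cpp
#include <cassert>

// Toy model of the close sequence the thread converges on: shutdown
// the send side first, let the pending receives drain to zero, and
// only then closesocket.
enum class ConnState { Active, Draining, Closed };

struct Teardown {
    ConnState state = ConnState::Active;
    int pendingRecvs;

    explicit Teardown(int pending) : pendingRecvs(pending) {}

    // Corresponds to calling shutdown(sock, SD_SEND).
    void begin_shutdown() {
        if (state == ConnState::Active)
            state = ConnState::Draining;
    }

    // Corresponds to one WSARecv completing (eventually with 0 bytes
    // once the peer closes).  Only at full drain is it safe to call
    // closesocket.
    void recv_completed() {
        if (state == ConnState::Draining && --pendingRecvs == 0)
            state = ConnState::Closed;
    }
};
```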
Thanks again,
-Randy
"SenderX" <x...@xxx.com> wrote in message
news:FmPmc.39095$_41.3594344@attbi_s02...