
CreateTimerQueueTimer fails unexpectedly


Kursat

Apr 26, 2010, 12:11:08 PM
Hi,

I am using timer queue timers as a timeout provider for overlapped Winsock
operations. I use CreateTimerQueueTimer and DeleteTimerQueueTimer with the
default timer queue. CreateTimerQueueTimer failed with ERROR_INVALID_HANDLE
once. I could not reproduce the error, but it definitely happened. There is
no information about this failure on the web. My question is: under which
circumstances does this function fail with ERROR_INVALID_HANDLE?

Thanks in advance


Alexandre Grigoriev

Apr 27, 2010, 12:39:04 AM

When a handle argument is incorrect. The only handle argument in that
function is TimerQueue, which is optional. If it's neither NULL nor a valid
timer queue handle, you'll get ERROR_INVALID_HANDLE.

"Kursat" <x...@yy.com> wrote in message
news:#nUcasV5...@TK2MSFTNGP04.phx.gbl...

Kursat

Apr 27, 2010, 1:49:01 AM

"Alexandre Grigoriev" wrote:


As I said in the original post, I use the default timer queue, which means I
pass NULL as the TimerQueue handle. It is a very strange failure: no handle
was passed, yet it complains about an invalid handle.

Hector Santos

Apr 27, 2010, 7:36:02 AM
Kursat,

Could you post a small snippet of code showing basically how you are doing
this? I can't duplicate it. However, my quick test has no association with
sockets, which to me is pretty odd as to why and when to employ this logic.
I never came across a need to do something like this with overlapped socket
I/O. Are you trying to implement some sort of socket I/O timeout?

I'm winging it, but it sounds like you are "pulling the rug out from under
the threads' feet" prematurely or out of sync, and an invalid handle
results. If all you are seeking here is an asynchronous socket I/O timeout
design, then why not simply use select()?

--
HLS



Kursat

Apr 27, 2010, 11:48:50 AM

"Hector Santos" <sant...@nospam.gmail.com> wrote in message
news:OuUqk3f5...@TK2MSFTNGP06.phx.gbl...


Hi Hector,

Yes, I use timer queue timers for socket IO timeouts. I cannot figure out
why overlapped socket operations don't have any timeout mechanism of their
own, but that is another story. I don't use select() because I am using IO
completion ports.


Hector Santos

Apr 27, 2010, 3:47:37 PM
Kursat wrote:

What are the timeouts for? Outgoing connections?

Does your CreateTimerQueueTimer() always fail the first time? Are you
recreating timers? Are they periodic, or one-shot?

Anyway, I could not find a situation where the function fails.

--
HLS

Kursat

Apr 27, 2010, 4:48:54 PM

"Hector Santos" <sant...@nospam.gmail.com> wrote in message
news:er2fRKk5...@TK2MSFTNGP06.phx.gbl...

The logic is very complex but, in essence:
I issue an overlapped WSARecv() for a socket. I send a command to a device
over the same socket and expect a response within a certain time interval.
Because overlapped WSARecv() has no integrated timeout mechanism and only
completes when some data is available for read, I have to keep track of time
myself so that I can inform clients if the response does not arrive in the
desired time period. So, I create a timer whenever I send a command. If the
response comes within the desired time period, I delete the timer and send
the response to registered clients. Otherwise the timer expires, and then I
know that the command has timed out and act accordingly. I developed a test
application for stress-testing this logic. The application simply sends
commands as fast as possible and generates a trace file about timeout and IO
completion behavior. In that file I saw that CreateTimerQueueTimer() failed
with ERROR_INVALID_HANDLE once. I tried to reproduce the same situation but
it has not happened again. The logic seems to be working well now, but I
suspect that this subtle error points to a bug in my code, so I want to know
if there is a certain situation in which CreateTimerQueueTimer() fails with
ERROR_INVALID_HANDLE so that I can review my logic against that situation. A
Microsoft person who knows the internals of the timer queue API may be able
to explain the issue; in the end it is nothing but a function, and somewhere
in its implementation there must be something like this:

if (something_goes_wrong)
{
    SetLastError(ERROR_INVALID_HANDLE);
    return FALSE;
}

I want to know what goes wrong.
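The create-on-send / delete-on-response bookkeeping described above can be
sketched in platform-neutral C++. All names here are hypothetical, and a
deadline map stands in for the actual CreateTimerQueueTimer() /
DeleteTimerQueueTimer() calls; it only illustrates the arm/complete/expire
life cycle of a per-command timeout:

```cpp
#include <cassert>
#include <chrono>
#include <map>
#include <vector>

using Clock = std::chrono::steady_clock;

// One pending command awaiting a device response.
struct PendingCommand {
    int id;                      // command sequence number
    Clock::time_point deadline;  // when it is considered timed out
};

class TimeoutTracker {
    std::map<int, PendingCommand> pending_;
public:
    // Called when a command is sent: start its "timer".
    void arm(int id, Clock::time_point now, std::chrono::milliseconds ttl) {
        pending_[id] = PendingCommand{id, now + ttl};
    }
    // Called when the response arrives in time: cancel the "timer".
    // Returns false if the command was already reaped as timed out.
    bool complete(int id) { return pending_.erase(id) != 0; }
    // Periodic sweep: collect and remove every command past its deadline
    // (the role the timer-queue callback plays in the real code).
    std::vector<int> reap(Clock::time_point now) {
        std::vector<int> expired;
        for (auto it = pending_.begin(); it != pending_.end();) {
            if (it->second.deadline <= now) {
                expired.push_back(it->first);
                it = pending_.erase(it);
            } else {
                ++it;
            }
        }
        return expired;
    }
};
```

The race the poster worries about lives in exactly this structure: a
response completing at the same moment the timer fires must resolve to
exactly one of `complete()` or `reap()`, never both.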


m

Apr 27, 2010, 8:37:40 PM
I am not sure what protocol you are attempting to implement, but from your
description I infer that it is UDP based and that you intend to communicate
with a _single_ remote host. IO completion ports are designed and optimized
to be used by applications issuing many overlapped IO operations from a few
threads, and IOOP timeouts are canonically incidental to this paradigm.
Consider that if the IOOPs are file IO, then timeout is irrelevant; if they
are stream socket based, then it is a transport issue; and if they are
datagram socket based, timeouts conflict with the stateless completion
processing model that underlies IOCP. For cases where the protocol is very
complex, a dedicated-thread sync IO model often works well, but if you opt
for stateless IO, then you must implement application-protocol-level
timeouts using an out-of-band mechanism (i.e. a timer) that will trigger a
logical close.

There are many examples of IOCP based servers on the net.


"Kursat" <x...@yy.com> wrote in message

news:uVB#Rsk5K...@TK2MSFTNGP04.phx.gbl...

Hector Santos

Apr 28, 2010, 12:37:22 AM
Kursat wrote:

> The logic is very complex but, in essence;
> I issue an overlapped WSARecv() for a socket. I send a command to a device
> over the same socket and expect response in a certain time interval.
> Because overlapped WSARecv() has no integrated timeout mechanism and it only
> completes when some data is available for read, I should keep track of time
> so that I can inform clients if the respond does not arrive in the
> desired time period.

If this is the sole reason for using the timer queue, then it's not a
good one IMO. If you are using overlapping I/O with WSARecv(), you
can couple this with

WSAWaitForMultipleEvents() (w/ non-infinite timeouts)
WSAGetOverlappedResult()

to introduce your own (efficient) polling timeout.

Roughly,

// Send Command

// Receive Response

WSAEVENT hOvrEvent = WSACreateEvent();
WSABUF buf[1] = {0};
buf[0].len = 4*1024;
buf[0].buf = new char[buf[0].len];

while (some loop)
{
    DWORD dwRead = 0;
    DWORD dwFlags = 0;
    WSAOVERLAPPED ovr = {0};
    ovr.hEvent = hOvrEvent;
    if (WSARecv(hSocket,
                buf,
                1,          // one WSABUF in the array
                &dwRead,
                &dwFlags,
                &ovr,
                NULL) == SOCKET_ERROR) {

        if (WSAGetLastError() == WSA_IO_PENDING) {
            WSAEVENT evts[2] = {0};
            evts[0] = hOvrEvent;
            evts[1] = hSomeGlobalTerminateEvent;

            // 5 second idle TIMEOUT for receive
            // (cEvents, events, fWaitAll, dwTimeout, fAlertable)

            switch (WSAWaitForMultipleEvents(2, evts, FALSE, 5000, FALSE))
            {
            case WAIT_OBJECT_0:
            {
                DWORD iof = 0;
                WSAResetEvent(hOvrEvent);
                if (WSAGetOverlappedResult(hSocket, &ovr,
                                           &dwRead, TRUE, &iof)) {
                    // process your data
                } else {
                    // socket closed?
                    // break out of loop
                }
                break;
            }
            case WAIT_OBJECT_0 + 1:
                // global terminate event
                // Break out of loop
                break;

            case WAIT_TIMEOUT:
                // IDLE TIMEOUT
                // Break out of loop
                break;
            }
        } else {
            // perhaps socket closed?
            break;
        }
    }
}

The above works very efficiently. In general, for a receive, an idle
timeout is what you are looking for. No out-of-band timer is required,
unless, as m indicated, you want to use one in some fashion to
invalidate a handle to break out of the above. That might be a global
terminate event set by the main thread, or something that invalidates
the socket.

Note: You can make the wait 1 second, say, and keep a dwIdleCount++
in WAIT_TIMEOUT; when it reaches X, you abort the reading. The
dwIdleCount is reset to 0 on the WAIT_OBJECT_0 event.

That allows you to be sensitive to other monitoring in the loop, like
an ABORT button.
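Reduced to its control flow, that idle-count suggestion looks like the
following sketch (hypothetical names; `events` stands in for the outcome of
each 1-second WSAWaitForMultipleEvents() slice, true meaning data arrived):

```cpp
#include <cassert>
#include <vector>

// Wait in 1-second slices, count consecutive timeouts, and abort the
// receive after `idle_limit` idle slices in a row. Returns how many
// waits were performed before the idle limit aborted the read (or the
// total number of events if the limit was never hit).
int waits_until_idle_abort(const std::vector<bool>& events, int idle_limit) {
    int idle = 0, waits = 0;
    for (bool data_arrived : events) {
        ++waits;
        if (data_arrived)
            idle = 0;              // WAIT_OBJECT_0: data resets the counter
        else if (++idle >= idle_limit)
            break;                 // cumulative idle timeout: abort read
    }
    return waits;
}
```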

--
HLS

Kursat

Apr 28, 2010, 2:04:01 AM
Hi m,

I use TCP, not UDP, and in fact the timeout mechanism I mentioned is not
related to the transport layer. It is my command-response timeout. The
application communicates with a number of embedded devices over TCP. When I
send a command to a device, it must respond in, let's say, 2 seconds. For
some reason it may not be able to respond as fast as I expect; this is not
related to the physical or logical link's status. I need a mechanism to
determine whether a command has been responded to in the desired time
period. If so, I process the received data; otherwise I assume that the
command has timed out. I use IOCP because the server communicates with many
devices and performance and scalability are desired. I implemented the same
server over serial communication and everything was easier, because I can
set timeouts for serial communication, and if no data arrives within the
timeout period then a zero-byte ReadFile() completion occurs on the IOCP,
so I know there is no response. Given this explanation, is it appropriate
to use timer queue timers?


Kursat

Apr 28, 2010, 2:21:01 AM
"Hector Santos" wrote:


But I use IO completion ports and have designed everything around them.
What you recommend is a totally different approach and not as efficient or
scalable as IO completion ports are.

Hector Santos

Apr 28, 2010, 3:10:04 AM
Kursat wrote:


I wasn't suggesting moving away from IOCP. Beyond the fact that IOCP
provides you with efficient thread pooling and an I/O mechanism that
minimizes networking I/O overhead (which can be duplicated just as
efficiently using non-IOCP frameworks), what is common to all
approaches is that you still need to handle the IOCP and/or
overlapped I/O *incomplete* cases.

In other words, what you are really trying to do is "change" the
built-in timeouts, so when you call:

    GetQueuedCompletionStatus()

it may wait efficiently, but it will also wait forever unless you
give it a timeout.

What I am noting is that the mechanism is already there. A timer
queue is redundant IMO and, as you found out, adds complexity and
uncertainty to what is already a complex IOCP model to begin with.
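The timeout Hector refers to is the dwMilliseconds parameter of
GetQueuedCompletionStatus() itself. A portable condition-variable analogue
of that behavior (a sketch, not the Windows API) is a completion queue whose
pop blocks for at most a given duration and reports a timeout distinctly:

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

// Analogue of GetQueuedCompletionStatus() with a timeout: pop() blocks
// at most `timeout` and returns std::nullopt on expiry, the way GQCS
// returns FALSE with GetLastError() == WAIT_TIMEOUT.
template <typename T>
class CompletionQueue {
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
public:
    void post(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    std::optional<T> pop(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lk(m_);
        if (!cv_.wait_for(lk, timeout, [&] { return !q_.empty(); }))
            return std::nullopt;   // timed out: no completion arrived
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};
```

With the real API the same effect is just a finite dwMilliseconds argument
instead of INFINITE; the worker thread then gets a periodic chance to check
deadlines.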

But then of course, if that is what you want to use, I'm sure you will
figure out what appears to be a synchronization bug where some handle
is getting clobbered, closed prematurely, or not released, whatever. :)

--
HLS

m

Apr 28, 2010, 9:22:57 PM
Ah, it all starts to make sense now ... Coming from a serial port model,
where the port is always open but the commands time out, you have
structured your TCP server the same way. This is why I thought you were
using UDP - it has the same control semantics.

For the most scalable design, you should use stateless processing for both
your IO completion and timeout logic. IOCP + timer queues is a good choice,
but a key point is that when the timer runs, calling shutdown with SD_BOTH
will abort any pending IO and trigger the cleanup logic you have already
written to deal with socket disconnect. This will prompt a disconnect from
the remote host, and presumably a reconnect, which should reset the state. I
assume that is what you want to do because the device has 'malfunctioned'.
This design closely couples the logical state of the connection with the
logical state of your communications with the device and works well for
almost all situations. A more advanced design, to specifically compensate
for unreliable networks, is to introduce a logical distinction between the
TCP connection state and the logical communications state, but that is
_much_ more complicated to implement and I doubt that it is necessary in
your case.

Also be aware that to achieve optimal performance from overlapped IO, you
should have multiple reads pending concurrently and use gather logic across
the received buffers to reconstruct the application protocol. Again I doubt
that this is necessary in your application, but it can reduce the UM-KM
transitions and buffering in the TCP stack (non-paged pool usage + memory
copies).
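That gather logic - reassembling application-protocol frames from whatever
byte ranges the pending reads complete with - can be sketched as follows.
The 1-byte length-prefix framing here is a made-up example, not the
poster's actual device protocol:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Gathers completed read buffers into protocol frames. Wire format
// (hypothetical): one length byte, then that many payload bytes.
// Complete frames are parsed out on each feed; a trailing partial
// frame is retained for the next read completion.
class FrameAssembler {
    std::string buf_;
public:
    // Append one completed read, return every whole frame now available.
    std::vector<std::string> feed(const std::string& chunk) {
        buf_ += chunk;
        std::vector<std::string> frames;
        size_t pos = 0;
        while (pos < buf_.size()) {
            size_t len = static_cast<uint8_t>(buf_[pos]);
            if (buf_.size() - pos - 1 < len) break;  // partial frame: wait
            frames.push_back(buf_.substr(pos + 1, len));
            pos += 1 + len;
        }
        buf_.erase(0, pos);
        return frames;
    }
};
```

The key property is that frame boundaries are independent of read
boundaries: a frame split across two completions is produced only when its
last byte arrives.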

"Kursat" <Kur...@discussions.microsoft.com> wrote in message
news:23791FC8-F61C-45EE...@microsoft.com...

Kursat

Apr 30, 2010, 6:27:01 AM
Hi m,

Thanks for the handy information.

In my case, there is a converter device to which the application connects.
Behind the converter there is a half-duplex RS-485 network. Therefore,
although there is only one TCP connection established between the application
and the converter, the application communicates with several devices over
that connection (of course there may also be several converters on the network
which means several times several devices:)). I simply send all messages to
the same endpoint for a single converter and the converter broadcasts the
message to its RS-485 network. Because the message contains an address, only
the targeted device processes the message, others simply discard it. So, if
I send a command to a device and the device does not respond, this means only
that device is unavailable. I must keep the connection alive to be able to
talk to other devices. So, in case of timeout I don't call shutdown(),
instead I mark the command as "timed-out" and inform the registered client(s)
about it. I only call shutdown() if WSASend() or WSARecv() fails and the
failure forces to do so.

By the way, I could not quite get what you mean by "stateless processing"?



m

Apr 30, 2010, 9:31:53 PM

Okay, so you are multiplexing communication with several devices over a
single TCP connection as well as having multiple TCP connections to various
controllers. Depending on the capabilities of the controllers, you may or
may not want to use overlapped IO at all, but assuming that they can accept
commands for multiple devices simultaneously, and responses can arrive at
arbitrary times from the devices, then IOCP is still your best design. Note
that this has nothing to do with the physical nature of the network the
controller is using to communicate with these devices, but on the logical
design of the controller.

Assuming that this is one of those more complex cases, then you _must_
abstract the IOOPs from the logical commands and responses. The IOOPs will
read or write arbitrary length data to or from the TCP stream. When sending
this is easy since you can build a whole command and send it in a single
write, but on read you must buffer the read results and parse out the
responses. Once the data for a response has been identified, then you need
to determine which command it belongs to - this is where stateful /
stateless processing comes in. In stateful processing, you look up the
'active' command on that connection and process the response, whereas in
stateless processing, some attribute of the response associates it with one
of the pending commands on the connection. The nomenclature is confusing,
but the principle is easy enough ;) The thread that is processing the IO
completion
either does or does not need to know anything about the global state of the
connection. After that, there is lots of choice as to how the responses are
processed, but you can see that the whole issue of IOOP timeout is
irrelevant since you aren't necessarily expecting to receive any data at any
particular time and the connection is only closed if the network is broken
or there is a problem with the controller or message framing on the TCP
stream.
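A minimal sketch of the stateless variant m describes, assuming (as the
thread suggests) that a device address carried in the response keys the
lookup among several pending commands on one connection - all names here
are hypothetical:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// The thread handling a response consults no per-connection 'active
// command'; an attribute of the response itself (here a device
// address) selects one of the pending commands.
struct Response { int address; std::string payload; };

class PendingCommands {
    std::map<int, std::string> by_address_;  // address -> command name
public:
    void add(int address, std::string command) {
        by_address_[address] = std::move(command);
    }
    // Match a response to its command, removing it from the pending set;
    // nullopt means the response was unsolicited (or already timed out).
    std::optional<std::string> match(const Response& r) {
        auto it = by_address_.find(r.address);
        if (it == by_address_.end()) return std::nullopt;
        std::string cmd = std::move(it->second);
        by_address_.erase(it);
        return cmd;
    }
};
```

The timed-out case in the earlier posts is then just another way an entry
leaves the pending set, which is why a late response must tolerate a failed
match.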

For this situation IOCP + timer queues is a good design. In the simpler
case where the controller can only accept a single command at a time, IOCP
only provides scale-out for the number of controllers that you can talk to,
but it is still a valid design.


"Kursat" <Kur...@discussions.microsoft.com> wrote in message

news:5CAB2E01-CB69-4D21...@microsoft.com...

Kursat

May 1, 2010, 6:54:22 AM
Thank you very much m.

