This routine disables task context switching. The task that
calls this routine will be the only task that is allowed to
execute, unless the task explicitly gives up the CPU by
making itself no longer ready.
What happens when the task next gets switched in after it becomes
ready again? Is it then in the taskLock()ed state? I had assumed
so, but I'm seeing behaviour that makes me wonder ...
I'm running a driver that is using logMsg() from its interrupt
routine. It looks like logMsg()'s logging task is subsequently
getting switched in even though my task that was running at the
time of the interrupt had called taskLock() some time earlier
(and perhaps given up the processor voluntarily since).
Is anyone aware of any undocumented interaction between logMsg()
and taskLock()?
I'm aware that taskLock() is not highly recommended. I'm in the
first stage of porting a driver from a traditional UNIX-like kernel,
and taskLock() appears to be a good way to emulate UNIX kernel
semantics for the time being.
>
> I'm running a driver that is using logMsg() from its interrupt
> routine. It looks like logMsg()'s logging task is subsequently
> getting switched in even though my task that was running at the
> time of the interrupt had called taskLock() some time earlier
> (and perhaps given up the processor voluntarily since).
>
How can you be sure that your task was running when the interrupt
came? You could only be certain of that if, after calling taskLock(),
your task were busy in an infinite computation loop, and I am fairly
sure it is not doing that.
So what I suspect is that, after calling taskLock(), your task leaves
the CPU before the interrupt arrives. If you call logMsg() in an
interrupt handler, logMsg() treats whichever task was running on the
CPU just before the interrupt as the calling task.
> Is anyone aware of any undocumented interaction between logMsg()
> and taskLock()?
>
> I'm aware that taskLock() is not highly recommended. I'm in the
> first stage of porting a driver from a traditional UNIX-like kernel,
> and taskLock() appears to be a good way to emulate UNIX kernel
> semantics for the time being.
-rajendra
> taskLock() is defined as
>
> This routine disables task context switching. The task that
> calls this routine will be the only task that is allowed to
> execute, unless the task explicitly gives up the CPU by
> making itself no longer ready.
>
> What happens when the task next gets switched in after it becomes
> ready again? Is it then in the taskLock()ed state? I had assumed
> so, but I'm seeing behaviour that makes me wonder ...
taskLock() is done on a task-by-task basis. If a task, t1, does a
taskLock(), then becomes not ready (e.g. taskDelay(), semTake(), etc.), it
will be switched out, and the scheduler will choose the highest priority
ready task, t2, to run. Once in t2, scheduling will happen normally
(unless, of course, t2 has also done a taskLock()). When t1 eventually
runs again, the taskLock will once again be in effect until t1 becomes not
ready or does a taskUnlock().
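The behaviour described above can be illustrated with a minimal sketch
(VxWorks kernel C, so it only runs on a VxWorks target; treat it as
illustrative rather than tested code):

```c
/* Illustrative sketch: taskLock() is per-task and survives blocking. */
#include <vxWorks.h>
#include <taskLib.h>

void lockedTask(void)
    {
    taskLock();            /* disable preemption for this task           */

    /* ... critical work: no other task can preempt us here ...          */

    taskDelay(10);         /* we become not ready: another task runs,
                            * and normal scheduling resumes among the
                            * other tasks while we are blocked           */

    /* When we are switched back in, the lock is in effect again;
     * preemption stays disabled until we call taskUnlock().             */

    taskUnlock();          /* re-enable preemption                       */
    }
```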
> I'm running a driver that is using logMsg() from its interrupt
> routine. It looks like logMsg()'s logging task is subsequently
> getting switched in even though my task that was running at the
> time of the interrupt had called taskLock() some time earlier
> (and perhaps given up the processor voluntarily since).
My suspicion is that your taskLock'd task is not actually the one that was
running at the moment of the interrupt. I would suggest using WindView to
gain insight into the exact timing of events.
Keith
--
mv ~karner /loony/bin
rajendra...@yahoo.com (rajendra) wrote in message news:<479ba395.0207...@posting.google.com>...
> j...@bcs.org.uk (J. J. Farrell) wrote in message news:<5c04bc56.02071...@posting.google.com>...
> > taskLock() is defined as
> >
> > This routine disables task context switching. The task that
> > calls this routine will be the only task that is allowed to
> > execute, unless the task explicitly gives up the CPU by
> > making itself no longer ready.
> >
> > What happens when the task next gets switched in after it becomes
> > ready again? Is it then in the taskLock()ed state? I had assumed
> > so, but I'm seeing behaviour that makes me wonder ...
> No, it will not.
Actually, it will be. When the task resumes execution the task lock
will be back in place. The lock is held in a counter in the TCB, so it
will be in place all the time the task is running. Also, since it is a
count, you will need the same number of calls to taskUnlock() as were
made to taskLock() to remove it.
As for the original problem, I suspect Rajendra's summary is correct,
and the interrupt is in fact coming during a phase when the 'locked'
task is blocked by its own actions. Which architecture are we talking
about here?
HTH,
John...
Sorry, I expressed myself poorly - the bit about my task running
is a red herring. The only way I can see at the moment for the
system to get into this strange state is if an interrupt has been
taken while I believe I am protected by both taskLock() and
intLock(). It is possible that my task has voluntarily given up
the CPU since it last called taskLock(). It struck me that I had
made an assumption about the behaviour of taskLock() in this case,
and the possible alternative behaviour would explain my problem.
If my task were no longer taskLock()ed, it could have been switched
out which would have re-enabled interrupts. An interrupt arriving
before it was switched back in would explain the situation - the
behaviour would be "as if" my task were running at the time, without
interrupts blocked.
> So what I suspect is, after calling taskLock your task leaves the CPU
> before the interrupt comes. If you are doing logMsg in interrupt
> handler then logMsg assumes task running on CPU before interrupt as
> calling task.
I don't believe the situation I'm seeing can arise if it gets switched
out at any of the expected possible rescheduling points.
Thanks for your input.
Thanks John. Rajendra's answer would explain my problem, but I must
admit that yours describes what I would expect to happen. I'm fairly
new to VxWorks - is there any reasonable way to hack my code to look
at the current state of taskLock() and intLock() for diagnostic
purposes? How do I get a pointer to the TCB, and is its layout
documented anywhere - or can anyone tell me whereabouts in it to
find the taskLock() state? And the same for intLock()?
> As for the original problem, I suspect Rajendra's summary is correct,
> and the interrupt is in fact coming during a phase when the 'locked'
> task is blocked by its own actions.
As explained in answer to Rajendra, that wouldn't explain the problem.
Sorry for my poor description that led everyone down this path.
> Which architecture are we talking about here?
MIPS.
Thanks for your help.
> Thanks John. Rajendra's answer would explain my problem, but I must
> admit that yours describes what I would expect to happen. I'm fairly
> new to VxWorks - is there any reasonable way to hack my code to look
> at the current state of taskLock() and intLock() for diagnostic
> purposes? How do I get a pointer to the TCB, and is its layout
> documented anywhere - or can anyone tell me whereabouts in it to
> find the taskLock() state? And the same for intLock()?
For diagnostic purposes you can assume the following:
1) The task ID is the pointer to the TCB
2) The TCB structure is defined in taskLib.h (look for WIND_TCB)
3) The task lock counter is an element of the TCB called lockCnt.
As for the interrupt lock state, that is a little harder to determine
since it is arch specific. I don't know much about MIPS, but I believe
that the SR register holds this information. You can get to the SR
register through the TCB as well, but its value there could be stale
(especially since you are locked, so there should have been no need to
save the state into the TCB ;-). My guess is that you will need to
craft a small piece of assembler to check this for you.
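The three points above might be combined into a quick diagnostic along
these lines (VxWorks-target C; the WIND_TCB layout is as described, but
field availability can vary between VxWorks versions, so treat this as
a sketch):

```c
#include <vxWorks.h>
#include <taskLib.h>
#include <logLib.h>

/* Print the current task's preemption-lock count for debugging.
 * The task ID is a pointer to the TCB (WIND_TCB, from taskLib.h). */
void showLockCnt(void)
    {
    WIND_TCB *pTcb = (WIND_TCB *) taskIdSelf();

    logMsg("task %#x lockCnt = %d\n",
           (int) pTcb, pTcb->lockCnt, 0, 0, 0, 0);
    }
```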
Another thing to consider is that on some architectures making system
calls (e.g. semGive) with interrupts locked has the side effect of
re-enabling interrupts. This is probably a bug, but there is a
recommendation in the programmer's guide that system calls are not
made while interrupts are locked. I know this affects X86 based
systems; I do not know about MIPS ones. What calls are you making
while interrupts are locked?
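The hazard looks roughly like this (another VxWorks-target sketch;
whether the interrupt lock survives the kernel call is architecture
dependent, which is exactly the problem):

```c
#include <vxWorks.h>
#include <intLib.h>
#include <semLib.h>

void riskyCriticalSection(SEM_ID sem)
    {
    int key = intLock();   /* interrupts off                             */

    /* Making kernel calls here is what the programmer's guide warns
     * against: on some architectures semGive() can re-enable
     * interrupts as a side effect, silently breaking the lock.          */
    semGive(sem);

    intUnlock(key);        /* restore the previous interrupt state       */
    }
```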
HTH,
John...
Thanks John - that's very likely to be it. I should have thought of
that, especially since I now remember reading that warning. I'm too
used to OSes that don't have this bug (sorry, feature). The current
rough port makes a lot of VxWorks calls with interrupts locked. It
looks like I'm going to have to do the more sophisticated version
sooner than I had expected ...
I'll use the other information you provided to check what's happening.
Thanks for your very helpful responses.