I know the difference between SEM_Q_PRIORITY and SEM_Q_FIFO. With
SEM_Q_FIFO in semBCreate, the task that has waited the longest gets
the semaphore.
But I ran into a situation that seems wrong, or at least unacceptable.
#include <stdio.h>
#include "semLib.h"
#include "taskLib.h" /* vxWorks */
#include "sysLib.h"  /* vxWorks */

SEM_ID sem1;

void test1() {
    for (int i = 0; i < 5; i++) {
        semTake(sem1, WAIT_FOREVER);
        printf("test1\n");
        semGive(sem1);
    }
}

void test2() {
    for (int i = 0; i < 5; i++) {
        semTake(sem1, WAIT_FOREVER);
        printf("test2\n");
        semGive(sem1);
    }
}

int main() {
    sem1 = semBCreate(SEM_Q_FIFO, SEM_FULL);
    taskSpawn("test1", 100, 0, 20000,
              (FUNCPTR)test1, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("test2", 100, 0, 20000,
              (FUNCPTR)test2, 0,0,0,0,0,0,0,0,0,0);
    return 0;
}
The result:
=======================================================
-> main
value = 0 = 0x0
-> test1
test1
test1
test1
test1
test2
test2
test2
test2
test2
I expected test1, test2, test1, test2, ... because I used a FIFO queue.
Does anyone have an idea about this? Please let me know.
Regards,
Alexander
test2's semTake is never executed until the test1 task ends, because
there is no context-switch point.
Consider this code:
void test1() {
    for (int i = 0; i < 5; i++) {
        semTake(sem1, WAIT_FOREVER);
        if (i == 0) taskDelay(0); // Here
        printf("test1\n");
        semGive(sem1);
    }
}
"taskDelay(0) call" makes context switch to test2.
and then test2's semTake is executed and test2 is inserted to wait
queue.
test1's semGive wakes up the task waiting in wait queue (test2).
Finally, we get test1, test2, test1, test2 ....
Is it right ?
I changed the for loop to a while loop and got the same result:
only test1, test1, test1, ...

void test1() {
    while (1) {
        semTake(sem1, WAIT_FOREVER);
        printf("test1\n");
        semGive(sem1);
    }
}

void test2() {
    while (1) {
        semTake(sem1, WAIT_FOREVER);
        printf("test2\n");
        semGive(sem1);
    }
}
Hi, I think the problem is the initialization of the semaphore. If you
create it as SEM_FULL, it will be available to the first task that
takes it. So this is what happens:
- sem1 is available
- task1 tries to take it, succeeds, prints its message, and gives
sem1 back
- the next iterations of task1 are so fast that task1 finishes before
task2 has even started
You should initialize sem1 as SEM_EMPTY. The two tasks will then pend
on sem1 at the beginning of their loops. Then add a semGive at the end
of your main to unblock the tasks. You should see task1 unblocked
first, because it has pended longer (being the first to be spawned).
The correct code for main should be:
int main() {
    sem1 = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
    taskSpawn("test1", 100, 0, 20000,
              (FUNCPTR)test1, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("test2", 100, 0, 20000,
              (FUNCPTR)test2, 0,0,0,0,0,0,0,0,0,0);
    semGive(sem1);
    return 0;
}
Let me know, bye!
Giacomo
The two tasks (test1 and test2) were at the same priority. If I
remember correctly, the default scheduling is round-robin. This means
that until test1 blocks (or is possibly scheduled out by a
higher-priority task coming in), test2 will not run.
Therefore, one must induce a context switch to get the desired test1,
test2, test1, test2, ... result. One way to do this is (as pointed out
earlier) to call taskDelay(0). This is a perfectly acceptable solution
as long as test2 is ready to run when taskDelay(0) is called.
In another example, it was suggested to initialize the binary
semaphore as empty and then give the semaphore. This too is a
perfectly acceptable solution, provided that both test1 and test2 are
pending on that semaphore before it is given.
I just wanted to draw attention to the underlying assumptions in these
examples. They have bitten me in the past, and I wanted to help others
avoid the same frustration.
On Feb 19, 6:37 am, "benelli.giac...@gmail.com" wrote:
The result:
test1
test1
test1
test1
test1
test2
...
I think we need to guarantee that test1 and test2 both execute their
semTake before the semaphore is given:

int main() {
    sem1 = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
    taskSpawn("test1", 100, 0, 20000,
              (FUNCPTR)test1, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("test2", 100, 0, 20000,
              (FUNCPTR)test2, 0,0,0,0,0,0,0,0,0,0);
    taskDelay(10); /* let both tasks pend on sem1 first */
    semGive(sem1);
    return 0;
}

With the taskDelay(10), it works as expected.
Hi,
I think this output is correct:
test1
test1
test1
test2
test2
test2
The test1 task gets enough time to print three or four times before
the OS reschedules the tasks based on priority and semaphore waiting.
By then, test2 has been waiting a long time for the semaphore, so it
gets executed.
-Thanks
C.Premnath
Hi,
Try giving a delay after the semGive in task1. This will make task1
pend for the semaphore. Remember that, unlike a mutex, a binary
semaphore cannot be taken recursively. Further, in your code you did
not set the round-robin time slice. By default in VxWorks 5.4/5.x, if
you don't set a time slice, the scheduling is FIFO, not round-robin.
This is why you are not getting the expected result. Try specifying a
time slice and let me know; you can also try a delay after task1's
semGive.
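For example, a minimal sketch of both suggestions (the 2-tick slice
and the 1-tick delay are arbitrary values I picked, not from the
original post):

#include <stdio.h>
#include "semLib.h"
#include "taskLib.h"
#include "kernelLib.h" /* kernelTimeSlice() */

SEM_ID sem1;

void test1() {
    for (int i = 0; i < 5; i++) {
        semTake(sem1, WAIT_FOREVER);
        printf("test1\n");
        semGive(sem1);
        taskDelay(1); /* pend briefly so the other task can take sem1 */
    }
}

void test2() {
    for (int i = 0; i < 5; i++) {
        semTake(sem1, WAIT_FOREVER);
        printf("test2\n");
        semGive(sem1);
        taskDelay(1);
    }
}

int main() {
    kernelTimeSlice(2); /* enable round-robin with a 2-tick slice */
    sem1 = semBCreate(SEM_Q_FIFO, SEM_FULL);
    taskSpawn("test1", 100, 0, 20000,
              (FUNCPTR)test1, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("test2", 100, 0, 20000,
              (FUNCPTR)test2, 0,0,0,0,0,0,0,0,0,0);
    return 0;
}

With either the time slice or the taskDelay(1) in place, the prints
should roughly alternate.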
Regards,
s.subbarayan
Hi,
By default, vxWorks uses priority-based preemptive scheduling, and
round-robin is off. Hence, among tasks of equal priority, the order of
execution is FIFO.
So test1 will always complete its job before test2 gets its chance to
execute. In this particular case, even if we enable kernelTimeSlice,
we may not (and mostly will not) see a change in the sequence of print
statements, as the 5 prints take a very short time and can finish even
before test2 gets its chance. Hence, if "the order of prints" is
important, we have two options:
1. Enable kernelTimeSlice and introduce a delay, say of 30 ticks, in
both tasks after the print statement. This logic can lead to race
conditions as the loop limit increases, since it is not possible to
predict accurately how long each print takes.
2. The current logic uses a single semaphore, which is a kind of
mutual exclusion. Better to use synchronisation with two semaphores,
as sketched below.
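A minimal sketch of the two-semaphore approach (the names semA/semB
and the choice of initial states are mine, not from the original
post): each task pends on its own semaphore and gives the other's, so
the prints strictly alternate regardless of time slicing.

#include <stdio.h>
#include "semLib.h"
#include "taskLib.h"

SEM_ID semA, semB;

void test1() {
    for (int i = 0; i < 5; i++) {
        semTake(semA, WAIT_FOREVER); /* wait for test1's turn */
        printf("test1\n");
        semGive(semB);               /* hand the turn to test2 */
    }
}

void test2() {
    for (int i = 0; i < 5; i++) {
        semTake(semB, WAIT_FOREVER); /* wait for test2's turn */
        printf("test2\n");
        semGive(semA);               /* hand the turn back to test1 */
    }
}

int main() {
    semA = semBCreate(SEM_Q_FIFO, SEM_FULL);  /* test1 goes first */
    semB = semBCreate(SEM_Q_FIFO, SEM_EMPTY); /* test2 waits */
    taskSpawn("test1", 100, 0, 20000,
              (FUNCPTR)test1, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("test2", 100, 0, 20000,
              (FUNCPTR)test2, 0,0,0,0,0,0,0,0,0,0);
    return 0;
}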
Cheers,
KK.