I need to collect data from the serial port every 8ms since the device
connected to the port works at 120Hz. This doesn't seem to work properly
due to the 10ms time slice interval of a normal i386 kernel. The
communication to the serial port is done in user space at the moment and is
not loaded as a module. I read a lot in the history of the newsgroup and
there are some solutions recommended, from low latency kernel patches to the
use of real-time Linux. But I'm pretty new to Linux and don't feel comfortable
patching the kernel since I don't know how a low latency patch affects
other programs on the system.
It is important that I get the data nearly every 8 ms because I don't want
the serial port buffer to overflow and lose data. But the use of real-time
Linux seems like too much overhead to me.
What about soft real-time? I looked at the sched* functions, but couldn't
get it running as fast as I want.
What about using the rtc?
Is it also possible to run the application at that frequency to work with
the data I get?
Thank you very much,
Jens Schumacher
You don't say how much data is produced at these 8ms intervals,
but let's look at some actual numbers and reconsider what you are
contemplating.
First, 9600 bps is pretty slow for a serial port, right? Yet if
you divide 1 second by 9600 bits you'll find that there is a bit
arriving every 0.000104 seconds! Hence, in 8ms a serial port
can grab 76.8 bits (or 7.68 bytes using 10-bit character
frames) at 9600 bps.
Most folks run serial ports at 57.6kbps (46 bytes in 8ms) or
even at 115.2kbps (92 bytes every 8ms).
>due to the 10ms time slice interval of a normal i386 kernel. The
There is the basic error. A normal i386 kernel doesn't use 10ms
time slices. Regardless, how fast an interrupt is handled or
perhaps how fast a context switch can be done is probably more
indicative of how fast data can be buffered from a serial port.
That depends on the cpu, but an example that I found with a
quick web search indicated that a 2.4GHz P4 has interrupt
overhead (for Linux) of something like 3 microseconds, and
significantly less for a context switch.
Clearly any modern system you are likely to use is going to be
quite able to handle data arriving every 8 milliseconds at
reasonable data rates.
>communication to the serial port is done in user space at the moment and is
>not loaded as a module. I read a lot in the history of the newsgroup and
>there are some solutions recommended, from low latency kernel patches to the
>use of real-time Linux. But I'm pretty new to Linux and don't feel comfortable
>patching the kernel since I don't know how a low latency patch affects
>other programs on the system.
>
>It is important that I get the data nearly every 8 ms because I don't want
>the serial port buffer to overflow and lose data. But the use of real-time
>Linux seems like too much overhead to me.
This appears to be a simple case of determining the minimum bit
rate that will handle the number of bytes you want to download,
and then using the common methods of reading data from a serial
port.
     data bytes    minimum bit rate
         1.92            2400
         7.68            9600
        15.36           19200
        30.72           38400
        46.08           57600
        92.16          115200
       184.32          230400
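(For reference, each row follows from the earlier arithmetic:

    bytes per 8ms = (bit rate / 10 bits per character frame) * 0.008 s

e.g. 38400 / 10 * 0.008 = 30.72 bytes.)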
If your serial port will be receiving only 1 byte every 8ms, even
at 2400 bps you have a significant margin.
Hence, none of the methods you are considering seem necessary.
On the other hand, if your device generates more data than the
fastest rate your serial ports are capable of, then some other
form of transfer mechanism (ethernet, ATM, etc.) is necessary.
>What about soft real-time? I looked at the sched* functions, but couldn't
>get it running as fast as I want.
>What about using the rtc?
>
>Is it also possible to run the application at that frequency to work with
>the data I get?
>
>Thank you very much,
>
>
>Jens Schumacher
>
--
Floyd L. Davidson <http://web.newsguy.com/floyd_davidson>
Ukpeagvik (Barrow, Alaska) fl...@barrow.com
Hello Floyd,
First of all thanks for your answer.
> You don't say how much data is produced at these 8ms intervals,
My device is running at 38400 bps and one frame of data is 24 bytes.
I read the data from the serial port with a timer function.
I measured the time between the timer calls and noticed that on average it
takes 10ms or longer. The problem is to read the data fast enough from the
buffer to avoid an overflow and data loss.
> There is the basic error. A normal i386 kernel doesn't use 10ms
> time slices. Regardless, how fast an interrupt is handled or
> perhaps how fast a context switch can be done is probably more
> indicative of how fast data can be buffered from a serial port.
> That depends on the cpu, but an example that I found with a
> quick web search indicated that a 2.4GHz P4 has interrupt
> overhead (for Linux) of something like 3 microseconds, and
> significantly less for a context switch.
>
> Clearly any modern system you are likely to use is going to be
> quite able to handle data arriving every 8 milliseconds at
> reasonable data rates.
But how come, even when I trigger the timer as fast as possible, I
don't get better results?
Maybe many of the threads I read are already outdated. But the time slice of
10 ms seems to be still an issue on modern PCs. I made some tests with
simple timer functions and the rate was around 10ms even when I set it to 8.
I used usleep() and gettimeofday() to make these measurements.
Thank you,
Jens
>>due to the 10ms time slice interval of a normal i386 kernel. The
>
> There is the basic error. A normal i386 kernel doesn't use 10ms
> time slices.
It does use a 10ms system interrupt, and that is therefore the basic
granularity when it comes to scheduling user tasks -- and he said he's doing
the communications in user space.
> Regardless, how fast an interrupt is handled or perhaps how fast a context
> switch can be done is probably more indicative of how fast data can be
> buffered from a serial port. That depends on the cpu, but an example that I
> found with a quick web search indicated that a 2.4GHz P4 has interrupt
> overhead (for Linux) of something like 3 microseconds, and significantly
> less for a context switch.
>
> Clearly any modern system you are likely to use is going to be
> quite able to handle data arriving every 8 milliseconds at
> reasonable data rates.
The serial port's ISR handling is certainly up to the task. The problem is
that there's no way to tell (from user space) when each byte arrived at the
UART (assuming that's something he cares about; the OP was a bit vague).
--
Grant Edwards grante Yow! Where's th' DAFFY
at DUCK EXHIBIT??
visi.com
> My device is running at 38400 bps and one frame of data is 24 bytes.
>
> I read the data from the serial port with a timer function.
Don't use a timer function.
> I measured the time between the timer calls and noticed that on average it
> takes 10ms or longer. The problem is to read the data fast enough from the
> buffer to avoid an overflow and data loss.
Then just call read() and ask for the number of bytes in a frame. You don't
need to wait between calls to read() -- it will wait for the data.
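If it helps, here's a rough, untested sketch of opening the port in raw,
blocking mode, so that read() sleeps until data arrives. The device path
and the 38400 8N1 settings are assumptions:

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Untested sketch -- /dev/ttyS0 and 38400 8N1 are assumptions. */
int open_port(void)
{
    struct termios tio;
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);  /* blocking mode */
    if (fd < 0)
        return -1;

    tcgetattr(fd, &tio);
    cfmakeraw(&tio);              /* raw mode: no line buffering, no echo */
    cfsetispeed(&tio, B38400);
    cfsetospeed(&tio, B38400);
    tio.c_cc[VMIN]  = 1;          /* read() blocks until at least 1 byte */
    tio.c_cc[VTIME] = 0;          /* no inter-byte timeout */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

With VMIN=1 a read(fd, buf, 24) returns as soon as at least one byte is
there, so you still loop until you've accumulated a whole frame.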
> But how come, even when I trigger the timer as fast as possible, I
> don't get better results?
Because the kernel's timer interrupt is 10ms, and that is therefore the
granularity of task scheduling. The real question is why are you sleeping?
> Maybe many of the threads I read are already outdated. But the time slice
> of 10 ms seems to be still an issue on modern PCs. I made some tests with
> simple timer functions and the rate was around 10ms even when I set it to
> 8. I used usleep() and gettimeofday() to make these measurements.
Yup. IIRC, the 2.6 IA32 kernel is going to switch to a 1KHz interrupt, but
for now it's 100Hz. I think it always has been 1KHz on Alpha.
I used the timer because I thought it would take too much CPU time to call
the read function in a loop. And I used it to synchronize the serial port
with my application. Do you think I should run a loop to read from the port
all the time? Is there a big difference between running this driver in user
space or as a module?
>> But how come, even when I trigger the timer as fast as possible, I
>> don't get better results?
>
> Because the kernel's timer interrupt is 10ms, and that is therefore the
> granularity of task scheduling. The real question is why are you sleeping?
I need a timer to load the data into the application and to process it.
I thought I could do this at the same rate as the device. But due to the
time slice of 10 ms it is not possible, or is it? What about changing the
kernel's timer interrupt? Any experience whether this affects other programs?
I just need the data from the serial port when a whole frame arrives every 8ms
and need to process the data directly. I never thought this would be a
problem on a modern system.
Thank you very much.
Jens
>> Then just call read() and ask for the number of bytes in a frame. You
>> don't need to wait between calls to read() -- it will wait for the data.
>
> I used the timer because I thought it would take too much CPU time to call
> the read function in a loop.
It's not a problem as long as you use a blocking read. Your process will
sleep until the requested amount of data is received.
> And I used it to synchronize the serial port with my application.
Don't know what that means.
> Do you think I should run a loop to read from the port all the time?
Yes.
> Is there a big difference between running this driver in user space or as a
> module?
Yes. There's a big difference. Writing a kernel module is a whole
different kettle of fish. Judging by the questions you're asking, you would
have a lot to learn if you wanted to write one. That's not to say that you
couldn't, but it would be a lot of work. If you _do_ want to do it, what
you probably want to do is write a "tty line discipline" module.
>>> But how come, even when I trigger the timer as fast as possible, I
>>> don't get better results?
>>
>> Because the kernel's timer interrupt is 10ms, and that is therefore the
>> granularity of task scheduling. The real question is why are you sleeping?
>
> I need a timer to load the data in the application and to process it.
I still don't understand why you need a timer. Why not just call read() and
process the data when it arrives?
> I thought I could do this at the same rate as the device. But due to the
> time slice of 10 ms it is not possible, or is it
First: don't use the phrase "time slice". It means something else. What
you're talking about is the kernel clock interrupt rate. The C macro
containing the number of interrupts per second is "HZ" (it's 100 on 2.4 IA32
systems), so sometimes you'll see people use phrases like "change the HZ
value" to describe what you're talking about.
Second: yes, changing it would be a lot of work. You would have to read
through all the kernel modules your system uses and make sure they don't
depend on the assumption that HZ==100.
> ? What about changing the kernel's timer interrupt?
That's what we are discussing.
> Any experience if this affects other programs?
Other device drivers are what will be affected. Other programs _might_ be
affected, but they'd have to be pretty broken to start with.
> I just need the data from the serial port when a whole frame arrives every 8ms
> and need to process the data directly. I never thought this would be a
> problem on a modern system.
I hate to ask again, but why not just call read() and tell it you want 24
bytes? Your program will sleep until 24 bytes have been received, then
read() will return.
> > Then just call read() and ask for the number of bytes in a frame. You don't
> > need to wait between calls to read() -- it will wait for the data.
> I used the timer because I thought it would take too much CPU time to call
> the read function in a loop. And I used it to synchronize the serial port
> with my application. Do you think I should run a loop to read from the port
> all the time? Is there a big difference between running this driver in user
> space or as a module?
You could loop on a blocking read. You could also use 'poll' to wait up
to a certain amount of time. The key is that each time you call 'read', you
*must* read all the data that's available before sleeping.
Do not try to teach the kernel that 24 bytes means something special to
you. It doesn't care. If you get 8 bytes, wonderful. You'll get 16 more
later. If you get 36 bytes, fine, process 24 of them now and save 12 for the
next time you get some data.
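Here's a rough sketch of that buffering (untested; handle_frame() and the
other names are made up for illustration):

#include <string.h>
#include <unistd.h>

#define FRAME 24

extern void handle_frame(const unsigned char *frame);  /* hypothetical */

static unsigned char acc[512];    /* accumulates partial frames */
static int have = 0;

void drain_port(int fd)
{
    unsigned char *p = acc;
    int n = read(fd, acc + have, sizeof acc - have);   /* blocks until data */
    if (n <= 0)
        return;                   /* error or EOF -- handle as needed */
    have += n;

    while (have >= FRAME) {       /* process every complete frame */
        handle_frame(p);
        p += FRAME;
        have -= FRAME;
    }
    memmove(acc, p, have);        /* keep leftover bytes for the next call */
}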
> >> But how come, even when I trigger the timer as fast as possible, I
> >> don't get better results?
> >
> > Because the kernel's timer interrupt is 10ms, and that is therefore the
> > granularity of task scheduling. The real question is why are you sleeping?
> I need a timer to load the data into the application and to process it.
No, you don't. You read the data when it's ready, not at some magic
time. You can use 'poll' to tell when there's data ready or you can use a
blocking read. Here's what you're doing:
1) Wait X time
2) Read Y bytes
3) Repeat.
Here's what you should do:
1) Wait X time or block in 'poll' until data is available.
2) Read however many bytes there are.
3) Repeat.
> I just need the data from the serial port when a whole frame arrives every 8ms
> and need to process the data directly. I never thought this would be a
> problem on a modern system.
It's not a problem; you're just trying to teach the kernel that 8
milliseconds and 24 bytes are special. It doesn't care and so won't do what
you want it to do. Just read however much data it has whenever it has it and
you'll be fine.
You can even do this:
1) Wait 8 milliseconds (though it will probably really be ten)
2) Read as many bytes as are ready without blocking.
3) Process as many complete frames as you now have, saving any leftover
bytes for the next pass.
DS
My head is swirling with confusion. I seek clarity in this matter. You wish
to collect data, and not have the buffer overflow.
The last time I checked on this matter, that was exactly the purpose of the
kernel serial driver, serial.o.
So I'm very confused as to why you are attempting to reinvent a wheel that
rolls quite well.
Now it may be another matter if you in fact need to time the receipt of each
byte, but that wasn't clear from the description above.
Since you are a newbie, and I'm confused, would you mind if we started over?
Here's what I got from above:
1) Device sends data via the serial port.
2) You need to collect the data.
3) You do not want to lose any data.
But there's a bunch missing:
1) What bit rate is the data transmitted?
2) Does the data need to be timestamped?
3) Is the data bursty or does it come in a steady stream?
4) What makes you think that the data will overrun the serial port and kernel
driver?
In the grand scheme of things, the serial port is a low speed device. The
current Linux serial port driver can receive and buffer a 460k continuous
stream without breaking a sweat. It isn't the old DOS BIOS serial driver.
-What about soft real-time? I looked at the sched* functions, but couldn't
-get it running as fast as I want.
-What about using the rtc?
-
-Is it also possible to run the application at that frequency to work with
-the data I get?
I'm still trying to figure out what you require from the OS. Care to explain
the goals of the application?
BAJ
>> And I used it to synchronize the serial port with my application.
>
> Don't know what that means.
>
This means that right after I call the function to get the data, I call
another function to handle the data. Seems not to be the best solution.
>> Is there a big difference between running this driver in user space or as a
>> module?
>
> Yes. There's a big difference. Writing a kernel module is a whole
> different kettle of fish. Judging by the questions you're asking, you would
> have a lot to learn if you wanted to write one. That's not to say that you
> couldn't, but it would be a lot of work. If you _do_ want to do it, what
> you probably want to do is write a "tty line discipline" module.
Ok, so I should stay in user space, since I don't have much time for the
project.
>> I need a timer to load the data in the application and to process it.
>
> I still don't understand why you need a timer. Why not just call read() and
> process the data when it arrives?
>
I programmed in C++. So there is one object which handles the
serial port connection and puts the data in some variables. There is another
object with a graphical interface which displays the data and draws a graph
etc. To get the data, I call a function in the "serial port object" which
returns the current values. And this is done with a timer. Maybe I should
change the object hierarchy, but I thought it is nice to have an independent
serial port object.
>> I thought I could do this at the same rate as the device. But due to the
>> time slice of 10 ms it is not possible, or is it
>
> First: don't use the phrase "time slice".
I just read that many people used this phrase...I didn't know that it is not
common.
>
> Second: yes, changing it would be a lot of work. You would have to read
> through all the kernel modules your system uses and make sure they don't
> depend on the assumption that HZ==100.
Yes, I thought so...but it looks like an easy solution at first sight.
> I hate to ask again, but why not just call read() and tell it you want 24
> bytes? Your program will sleep until 24 bytes have been received, then
> read() will return.
The read function returns the number of bytes it managed to read from the
buffer, no matter how many bytes you want to read. So, as far as I
know, there is nothing like a sleep until 24 bytes are received, or is
there? I don't know any other nice solution without keeping the cpu busy.
One way would be to check in a loop if 24 bytes are in the buffer and then
call the read function. But I'm not sure if this is a nice solution.
Thanks,
Jens Schumacher
> The read function returns the number of bytes it managed to read from the
> buffer, no matter how many bytes you want to read.
Read never returns more than you request, but it can return less.
> So, as far as I
> know, there is nothing like a sleep until 24 bytes are received, or is
> there?
No, but you can ask it to sleep until *some* bytes are ready, then read
up to 24 of them.
> I don't know any other nice solution without keeping the cpu busy.
> One way would be to check in a loop if 24 bytes are in the buffer and then
> call the read function. But I'm not sure if this is a nice solution.
Consider this (pseudo-)code:
int nread = 0;
while (nread < 24) {
    int r = read(fd, buffer + nread, 24 - nread);
    if (r < 0)
        break;        /* error occurred -- check errno */
    if (r == 0)
        break;        /* file was closed (EOF) */
    nread += r;
}
/* buffer now holds a 24 byte record from the serial port */
Worst case, if the read call returns each byte separately, is 24 system
calls; between each call it will sleep when no data is available. If
your program falls behind, the kernel will buffer up the incoming data
and return all 24 bytes to you next time, in only one system call. Even
in the worst case, however, your program will probably be sleeping most
of the time.
Unless you have very strict timing requirements (i.e. each packet must
be processed within a certain amount of time), I think you are making
this more complicated than it needs to be.
--
Andrew
> Since you are a newbie, and I'm confused, would you mind if we started over?
>
> Here's what I got from above:
>
> 1) Device sends data via the serial port.
> 2) You need to collect the data.
> 3) You do not want to lose any data.
That's right.
> But there's a bunch missing:
>
> 1) What bit rate is the data transmitted?
The bit rate is 38400 bps
> 2) Does the data need to be timestamped?
Not really necessary
> 3) Is the data bursty or does it come in a steady stream?
The data comes in a steady stream at 120 Hz
> 4) What makes you think that the data will overrun the serial port and kernel
> driver?
The code at the moment calls the read function with a timer. If this timer
is not triggered every 8ms, the next data comes into the buffer while the old
data is still there...if this happens very often the buffer overflows and I
lose data.
> I'm still trying to figure out what you require from the OS. Care to explain
> the goals of the application?
Here are my application goals in detail.
1) I use an eye-tracking system which operates on the serial port at
38400bps.
2) The eye tracker sends a package of 24 bytes at 120Hz to the serial port.
3) I have to read, process and display this data, as fast as
possible since I need to detect very fast eye movements.
Do you need more detail?
I could explain my current implementation, but I'm not sure if it's helpful.
If you need more details about this I can describe it.
Thank you all so much.
Jens Schumacher
Of course it's the best solution.
-
->> Is there a big difference between running this driver in user space or as a
->> module?
->
-> Yes. There's a big difference. Writing a kernel module is a whole
-> different kettle of fish. Judging by the questions you're asking, you would
-> have a lot to learn if you wanted to write one. That's not to say that you
-> couldn't, but it would be a lot of work. If you _do_ want to do it, what
-> you probably want to do is write a "tty line discipline" module.
-
-Ok, so I should stay in user space, since I don't have much time for the
-project.
Correct. Everything you need is accessible from user space.
->> I need a timer to load the data in the application and to process it.
->
-> I still don't understand why you need a timer. Why not just call read() and
-> process the data when it arrives?
->
-I programmed in C++. So there is one object which handles the
-serial port connection and puts the data in some variables. There is another
-object with a graphical interface which displays the data and draws a graph
-etc. To get the data, I call a function in the "serial port object" which
-returns the current values. And this is done with a timer. Maybe I should
-change the object hierarchy, but I thought it is nice to have an independent
-serial port object.
That organization is fine. You simply want to poll the serial object for
new data. Let the serial driver handle the receipt of the data.
-> Second: yes, changing it would be a lot of work. You would have to read
-> through all the kernel modules your system uses and make sure they don't
-> depend on the assumption that HZ==100.
-
-Yes, I thought so...but it looks like an easy solution at first sight.
Not worth the energy to accomplish this.
-
-> I hate to ask again, but why not just call read() and tell it you want 24
-> bytes? Your program will sleep until 24 bytes have been received, then
-> read() will return.
-
-The read function returns the number of bytes it managed to read from the
-buffer, no matter how many bytes you want to read. So, as far as I
-know, there is nothing like a sleep until 24 bytes are received, or is
-there? I don't know any other nice solution without keeping the cpu busy.
-One way would be to check in a loop if 24 bytes are in the buffer and then
-call the read function. But I'm not sure if this is a nice solution.
You'll have to loop that yourself. But since you know data is going to show
up every 8 ms, you can simply timestamp the last time you read the port, and
if more than 8ms has elapsed, read the new data.
But there certainly isn't a need for you to attempt real time management of
the serial port. The kernel driver will handle the port. All you need to do
is talk to the driver.
BAJ
> You could loop on a blocking read. You could also use 'poll' to wait up
> to a certain amount of time. The key is that each time you call 'read', you
> *must* read all the data that's available before sleeping.
When I specify blocking in the open function, read doesn't return until
there are 24 bytes available, is this right?
> Do not try to teach the kernel that 24 bytes means something special to
> you. It doesn't care. If you get 8 bytes, wonderful. You'll get 16 more
> later. If you get 36 bytes, fine, process 24 of them now and save 12 for the
> next time you get some data.
I ran into some problems when I tried this. I tried to read the data and
checked how many bytes were read. When I had fewer than 24 I read again and
asked for the missing bytes. But somehow some bits were corrupt and gave me
wrong numbers.
This is why I checked the bytes in the buffer first, and when there are 24
bytes I read them.
> No, you don't. You read the data when it's ready, not at some magic
> time. You can use 'poll' to tell when there's data ready or you can use a
> blocking read. Here's what you're doing:
>
> 1) Wait X time
> 2) Read Y bytes
> 3) Repeat.
>
> Here's what you should do:
>
> 1) Wait X time or block in 'poll' until data is available.
> 2) Read however many bytes there are.
> 3) Repeat.
1) Sounds reasonable to me...but how do I block? Sorry, maybe a stupid
question.
2&3) I ran into the problem described above.
>> I just need the data from the serial port when a whole frame arrives every 8ms
>> and need to process the data directly. I never thought this would be a
>> problem on a modern system.
>
> It's not a problem; you're just trying to teach the kernel that 8
> milliseconds and 24 bytes are special. It doesn't care and so won't do what
> you want it to do. Just read however much data it has whenever it has it and
> you'll be fine.
>
> You can even do this:
>
> 1) Wait 8 milliseconds (though it will probably really be ten)
> 2) Read as many bytes as are ready without blocking.
> 3) Process as many complete frames as you now have, saving any leftover
> bytes for the next pass.
Then I will have another delay before I can process the data. I try to avoid
any delay like this.
>
> DS
>
>
>> Do not try to teach the kernel that 24 bytes means something
>> special to you. It doesn't care. If you get 8 bytes,
>> wonderful. You'll get 16 more later. If you get 36 bytes,
>> fine, process 24 of them now and save 12 for the next time you
>> get some data.
>
>> I ran into some problems when I tried this. I tried to read the
>> data and checked how many bytes were read. When I had fewer
>> than 24 I read again and asked for the missing bytes. But
>> somehow some bits were corrupt and gave me wrong numbers.
Then the baud rate is wrong, or the parity is wrong, or you're
out-of-sync with the frame boundaries, or something else is
wrong.
The serial driver has a 4KB receive FIFO. It's not going to
lose data unless you stop reading, and it's not going to
corrupt bits unless you have the serial port set incorrectly.
> This is why I checked the bytes in the buffer first, and when
> there are 24 bytes I read them.
You're making this way harder than it really is. Just call
read() until you have 24 bytes and then process them. If
you're having trouble syncing up with the incoming data stream
when the program starts, then solve that problem somehow in the
processing of the data. You're not going to be able to use the
8ms periodicity of the data to sync on unless you do a lot of
work in kernel-mode.
>> No, you don't. You read the data when it's ready, not at some
>> magic time. You can use 'poll' to tell when there's data ready
>> or you can use a blocking read. Here's what you're doing:
>>
>> 1) Wait X time
>> 2) Read Y bytes
>> 3) Repeat.
>>
>> Here's what you should do:
>>
>> 1) Wait X time or block in 'poll' until data is available.
>> 2) Read however many bytes there are.
>> 3) Repeat.
When he said Wait X time, I presume he meant as a way of
detecting when the data stops -- so that your program doesn't
sit forever.
> 1) Sounds reasonable to me...but how do I block? Sorry, maybe a stupid
> question.
By default, a read() will block if there is no data available.
If you want to have a timeout on the blocking, use select() or
poll().
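For example, a sketch (untested) that waits up to 100ms for the port to
become readable:

#include <sys/select.h>

/* Returns >0 if data is ready (then call read()), 0 on timeout, <0 on error. */
int wait_for_data(int fd)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = 0;
    tv.tv_usec = 100000;          /* 100 ms timeout */
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}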
> 2&3) I ran into the problem described above.
Then there's something else wrong.
>>> I just need the data from the serial port when a whole frame
>>> arrives every 8ms and need to process the data directly. I
>>> never thought this would be a problem on a modern system.
>>
>> It's not a problem; you're just trying to teach the kernel that
>> 8 milliseconds and 24 bytes are special. It doesn't care and
>> so won't do what you want it to do. Just read however much
>> data it has whenever it has it and you'll be fine.
>>
>> You can even do this:
>>
>> 1) Wait 8 milliseconds (though it will probably really be ten)
>> 2) Read as many bytes as are ready without blocking.
>> 3) Process as many complete frames as you now have, saving any leftover
>> bytes for the next pass.
>
> Then I will have another delay before I can process the data.
> I try to avoid any delay like this.
Don't delay. Don't call usleep(). Just call select/poll to
wait for data or call read() in blocking mode (which is the
default). Then just process the data stream as it arrives.
--
Grant Edwards grante Yow! Make me look like
at LINDA RONSTADT again!!
visi.com
Cool.
-
-> But there's a bunch missing:
->
-> 1) What bit rate is the data transmitted?
-The bit rate is 38400 bps
Slow.
-
-> 2) Does the data need to be timestamped?
-Not really necessary
Great.
-
-> 3) Is the data bursty or does it come in a steady stream?
-The data comes in a steady stream at 120 Hz
Ultra Slow.
-
-> 4) What makes you think that the data will overrun the serial port and kernel
-> driver?
-The code at the moment calls the read function with a timer. If this timer
-is not triggered every 8ms, the next data comes into the buffer while the old
-data is still there...if this happens very often the buffer overflows and I
-lose data.
But that's if you're handling the task in user space by hand, right? I mean
you are making inb() calls to read directly from the serial port, correct?
That's what I got from your original message when you said "user space and
no module is loaded."
No. That's not right. You state above "... calls the read function with a
timer." So that means you are using the serial driver. OK. Question answered
here so I'm moving on...
-
-> I'm still trying to figure out what you require from the OS. Care to explain
-> the goals of the application?
-
-Here are my application goals in detail.
-
-1) I use an eye-tracking system which operates on the serial port at
-38400bps.
OK.
-2) The eye tracker sends a package of 24 bytes at 120Hz to the serial port.
No problem.
-3) I have to read, process and display this data, as fast as
-possible since I need to detect very fast eye movements.
OK. So the application is serial port driven then, right? Is there anything else
that needs to happen between these events? If not then the advice you
have been getting in this thread is correct: loop on the read until you have
24 bytes, update everything, then go back to the top of the event loop and
wait for the next 24 bytes from the ET.
This is classic event programming. No timer is required.
-
-Do you need more detail?
Nope. It's clear now.
-I could explain my current implementation, but I'm not sure if it's helpful.
-If you need more details about this I can describe it.
Unless something else is going on, you need not worry about having a timer.
Your process will sleep on a read of the serial port if nothing is ready.
A normal blocking read will only return when it has at least 1 byte of data.
-
-
-Thank you all so much.
No problem.
BAJ
> -I programmed in C++. So there is one object which handles the
> -serial port connection and puts the data in some variables. There is another
> -object with a graphical interface which displays the data and draws a graph
> -etc. To get the data, I call a function in the "serial port object" which
> -returns the current values. And this is done with a timer. Maybe I should
> -change the object hierarchy, but I thought it is nice to have an independent
> -serial port object.
>
> That organization is fine. You simply want to poll the serial object for
> new data. Let the serial driver handle the receipt of the data.
OK, got this, but there is the problem: how can I poll the data with a
frequency of 120Hz? When I use a timer, the best case I can get is a
temporal resolution of 100Hz, right?
> -The read function returns the number of bytes it managed to read from the
> -buffer, no matter how many bytes you want to read. So, as far as I
> -know, there is nothing like a sleep until 24 bytes are received, or is
> -there? I don't know any other nice solution without keeping the cpu busy.
> -One way would be to check in a loop if 24 bytes are in the buffer and then
> -call the read function. But I'm not sure if this is a nice solution.
>
> You'll have to loop that yourself. But since you know data is going to show
> up every 8 ms, you can simply timestamp the last time you read the port, and
> if more than 8ms has elapsed, read the new data.
But how can I trigger the event? If I insert a sleep I'm limited again to
the 100Hz kernel frequency. And as described the delay should be as minimal
as possible. Can I use this timestamp without running into this 100Hz limit?
>
> But there certainly isn't a need for you to attempt real time management of
> the serial port. The kernel driver will handle the port. All you need to do
> is talk to the driver.
This is all I want. It would be nice if I could do it that easily. Maybe I
am just thinking too complicated at the moment. But I'm really confused.
Thanks
Jens Schumacher
> OK, got this, but there is the problem: how can I poll the data with a
> frequency of 120Hz?
You don't. Just call read() until you have enough data to
process. Then process it. Then start calling read() again.
Forget about the whole 120Hz business. That's completely
hidden from you by the OS and device driver. All you're going to get is a
stream of bytes that you acquire by calling read().
> When I use a timer, the best case I can get is a
> temporal resolution of 100Hz, right?
Right. Forget about timing. Forget about 100Hz. Forget about
120Hz. Forget about timing.
>> You'll have to loop that yourself. But since you know data is going to show
>> up every 8 ms, you can simply timestamp the last time you read the port, and
>> if more than 8ms has elapsed, read the new data.
>
> But how can I trigger the event?
What event??
1) Call read() in a loop until you've got enough data to do
something. If you're in-sync, that's 24 bytes. If you're
not in sync yet, it's 47 bytes (minimum number of bytes you
can read and be guaranteed that you have a complete frame).
2) Do something with the data.
3) Goto 1)
You'll need to be able to detect where frames are in the data
stream. Hopefully there's a start-of-record byte and/or an
end-of-record byte.
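If, say, the first byte of a frame can be recognized by a set top bit (an
assumption -- your protocol may differ), the resync could look like this
untested sketch:

#include <unistd.h>

/* Sketch: resync on a start-of-frame marker, then collect the rest
   of the 24-byte frame.  The 0x80 marker is an assumption. */
int read_frame(int fd, unsigned char *frame)
{
    int got;

    do {                                   /* hunt for the frame marker */
        if (read(fd, frame, 1) != 1)
            return -1;                     /* error or EOF */
    } while (!(frame[0] & 0x80));

    got = 1;
    while (got < 24) {                     /* collect the remaining bytes */
        int r = read(fd, frame + got, 24 - got);
        if (r <= 0)
            return -1;
        got += r;
    }
    return 0;                              /* frame[] now holds 24 bytes */
}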
--
Grant Edwards grante Yow! Is this an out-take
at from the "BRADY BUNCH"?
visi.com
Just some questions left...
>> I ran into some problems when I tried this. I tried to read the
>> data and checked how many bytes were read. When I had fewer
>> than 24 I read again and asked for the missing bytes. But
>> somehow some bits were corrupt and gave me wrong numbers.
>
> Then the baud rate is wrong, or the parity is wrong, or you're
> out-of-sync with the frame boundaries, or something else is
> wrong.
Baud rate is definitely correct, parity too, and I check the frame boundaries
with a frame bit which is one at the beginning of every frame. Don't know
what's wrong there...I will double check this with my reimplementation.
> By default, a read() will block if there is no data available.
> If you want to have a timeout on the blocking, use select() or
> poll().
Does read() block until it has read as many bytes as specified in the function
call, or does it just block until it has read some bytes, no matter how many?
Couldn't find information about this in the man page.
> Don't delay. Don't call usleep(). Just call select/poll to
> wait for data or call read() in blocking mode (which is the
> default). Then just process the data stream as it arrives.
I will try this. Should I use select/poll rather than read()? Are there any
advantages?
Just one last question:
Any suggestions how to get the data from the "serial port object" to the
"application object"? I like to keep them as separated as possible...this is
why I triggered a timer in the "application object" to fetch the data
available in the "serial port object".
And again, thank you very much. I'm very grateful for the help you offered me.
Jens Schumacher
>> By default, a read() will block if there is no data available.
>> If you want to have a timeout on the blocking, use select() or
>> poll().
>
> Does read() block until it has read as many bytes as specified in
> the function call, or does it just block until it has read some bytes, no
> matter how many? Couldn't find information about this in the man page.
It will usually wait until it has read the number of bytes that
were requested. However, that's not guaranteed by the read()
system call API, so to be safe, you should call it in a loop
until you have accumulated "enough" bytes.
The cases where a driver will return less than the requested
amount of data to a read() call vary depending on the exact
device in question, and I don't know what those conditions are
for the tty line discipline module (which is what you're dealing
with when you call read() on a serial port).
> I will try this. Should I use select/poll rather than read()?
> Are there any advantages?
The advantages of select/poll are:
1) You can use them to wait for data on multiple file
descriptors (serial ports, TCP connections, etc.)
2) You can specify a "timeout" value so the call will return
after a specified time even if there is no data available.
If you use select/poll, they only tell you when there is some
data -- you still have to call read() to get the data.
> Just one last question: Any suggestions how to get the data
> from the "serial port object" to the "application object"? I
> like to keep them as separated as possible...this is why I
> triggered a timer in the "application object" to fetch the data
> available in the "serial port object".
I would have the "serial port object" call an "application
object" method and pass the data to the method as a parameter.
> And again, thank you very much. I'm very grateful for the help
> you offered me.
No problem.
--
Grant Edwards grante Yow! Where's the Coke
at machine? Tell me a joke!!
visi.com
You don't poll using a timer. You simply sleep waiting for data from the
serial port. The receipt of the data will trigger the continuation of the
application. No timer is necessary.
-
-> -The read function returns the number of bytes it managed to read from the
-> -buffer, no matter how many bytes you want to read. So, as far as I
-> -know, there is nothing like a sleep until 24 bytes are received, or is
-> -there? I don't know any other nice solution without keeping the cpu busy.
-> -One way would be to check in a loop if 24 bytes are in the buffer and then
-> -call the read function. But I'm not sure if this is a nice solution.
->
-> You'll have to loop that yourself. But since you know data is going to show
-> up every 8 ms, you can simply timestamp the last time you read the port, and
-> if more than 8ms has elapsed, read the new data.
-
-But how can I trigger the event? If I insert a sleep I'm limited again to
-the 100Hz kernel frequency. And as described the delay should be as minimal
-as possible. Can I use this timestamp without running into this 100Hz limit?
You can do this by not sleeping, or by sleeping on something that isn't
subject to the timer resolution, like the serial port.
-> But there certainly isn't a need for you to attempt real time management of
-> the serial port. The kernel driver will handle the port. All you need to do
-> is talk to the driver.
-
-This is all I want. It would be nice if I could do it that easily. Maybe I
-am just thinking too complicated at the moment. But I'm really confused.
Keep it simple: just read the serial port and run the rest of your program
after you read 24 bytes of data. Everything else will follow fine from there.
BAJ
by...@cc.gatech.edu (Byron A Jeff) writes:
> In article <BBC472F9.1C9E%jens.sc...@gmx.net>,
> Jens Schumacher <jens.sc...@gmx.net> wrote:
<...>
> -> 4) What makes you think that the data will overrun the serial
> -> port and kernel driver?
>
> -The code at the moment calls the read function with a timer. If this timer
> -is not triggered every 8ms, the next data comes into the buffer while the old
> -data is still there...if this happens very often the buffer overflows and I
> -lose data.
>
> But that's if you're handling the task in user space by hand, right? I mean
> you are making inb() calls to read directly from the serial port, correct?
> That's what I got from your original message when you said "user space and
> no module is loaded."
>
> No. That's not right. You state above "... calls the read function with a
> timer." So that means you are using the serial driver. OK. Question answered
> here so I'm moving on...
Maybe something was misunderstood. Jens, are you talking about the
system call 'read()' or about some read() method of your 'serial
object'? Did you write that class yourself? And if not: do you have
access to the source code?
Maybe you should post some code just to let us know we're talking
about the same thing.
Daniel.
> Maybe something was misunderstood. Jens, are you talking about the
> system call 'read()' or about some read() method of your 'serial
> object'? Did you write that class yourself? And if not: do you have
> access to the source code?
>
> Maybe you should post some code just to let us know we're talking
> about the same thing.
>
> Daniel.
I changed the code today and now call read() in a loop.
Here is my current code:
void update ()
{
    long t1, t2, t;
    char buf[255];

    // Start timer
    gettimeofday( &tv, NULL );
    t1 = tv.tv_sec*1000 + tv.tv_usec/1000;

    while(1){
        int bytes = 0;
        int res = 0;
        int sofar = 0;
        int fails = 0;

        // check if at least one byte is available in the buffer
        while(bytes < 1){
            bytes = serial_check_buffer(fileDesc);
            fails++;
        }

        // read one byte from the buffer.
        // if byte contains framebit: continue
        while((res = serial_read(fileDesc, buf, 1)) == 1 && !(buf[0] & 0x80)){
            fails++;
            printf("FAILED TO DETECT FRAMEBIT, look at the next bit\n");
        }

        /* read the remaining 23 bytes from the buffer */
        sofar = 1;
        while (sofar < 24){
            if ((res = serial_read(fileDesc, &(buf[sofar]), 24 - sofar)) <= 0){
                ; // do nothing
            }
            else{
                sofar += res;
            }
        }

        /* check again if it's really the framebit on pos1 in the buffer */
        if(buf[0] & 0x80){
            /* do some stuff with the data */
            handleData (buf);
        }
        else{
            printf("NO FRAMEBIT FOUND. Data invalid\n");
        }

        // get the time the function needed to run.
        gettimeofday( &tv, NULL );
        t2 = tv.tv_sec*1000 + tv.tv_usec/1000; /* ms */
        t = t2 - t1;
        printf("delay: %ld ms\n\n", t );
        t1 = t2;  /* restart the timer for the next frame */
    }
}
The time I measure is mostly 10ms. Sometimes I get 0ms.
How come it takes exactly 10 ms and not 8 or 11?
Is this due to the frequency of the kernel?
And another problem is that my gui (written in qt) doesn't start when I
start the while loop. Does the polling of the serial port take too much cpu
time?
Thanks, Jens
> I changed the code today and now call read() in a loop.
> Here is my current code:
>
> void update ()
> {
> long t1, t2, t;
> char buf[255];
>
> // Start timer
> gettimeofday( &tv, NULL );
> t1 = tv.tv_sec*1000 + tv.tv_usec/1000;
>
> while(1){
>
> int bytes = 0;
> int res = 0;
> int sofar = 0;
> int fails = 0;
> // check if at least one byte is available in the buffer
> while(bytes < 1){
> bytes = serial_check_buffer(fileDesc);
> fails++;
> }
Why are you doing this serial_check_buffer thing? It's just wasting CPU
time spinning in a while() loop.
> // read one byte from the buffer.
> // if byte contains framebit: continue
> while((res = serial_read(fileDesc, buf, 1)) == 1 && !(buf[0] & 0x80)){
> fails++;
> printf("FAILED TO DETECT FRAMEBIT, look at the next bit\n");
> }
>
> /* read the remaining 23 bytes from the buffer*/
> sofar = 1;
> while (sofar < 24){
> if ((res = serial_read(fileDesc, &(buf[sofar]), 24 - sofar)) <= 0){
> ; // do nothing
If you get here, an error happened -- you should probably abort.
> }
> else{
> sofar += res;
> }
> }
> /* check again if it's really the framebit on pos1 in the buffer */
> if(buf[0] & 0x80){
> /* do some stuff with the data */
> handleData (buf);
> }
> else{
> printf("NO FRAMEBIT FOUND. Data invalid\n");
> }
>
> // get the time the function needed to run.
> gettimeofday( &tv, NULL );
> t2 = tv.tv_sec*1000 + tv.tv_usec/1000; /* ms */
> t = t2 - t1;
> printf("delay: %d ms\n\n", t );
>
>
> }
> }
> The time I measure is mostly 10ms. Sometimes I get 0ms.
> How come it takes exactly 10 ms and not 8 or 11?
> Is this due to the frequency of the kernel?
Yes. The system timer tick in current IA32 kernels is 100Hz or 10ms. That
is the granularity with which tasks get scheduled.
> And another problem is that my gui (written in qt) doesn't start when I
> start the while loop. Does the polling of the serial port take too much cpu
> time?
Possibly. Get rid of the busy-wait loop that's calling serial_check_buffer.
It's doing nothing but wasting CPU time.
I presume you're doing the GUI stuff in another thread so that it runs
independently from the thread that's reading and processing data?
--
> In article <BBC580FE.1CDB%jens.sc...@gmx.net>, Jens Schumacher wrote:
>> // check if at least one byte is available in the buffer
>> while(bytes < 1){
>> bytes = serial_check_buffer(fileDesc);
>> fails++;
>> }
>
> Why are you doing this serial_check_buffer thing? It's just wasting CPU
> time spinning in a while() loop.
Is there a difference between running in a while loop asking how many
bytes are available in the buffer, and running in a while loop until a
specific amount of bytes has been read? Like in the following part:
>> // read one byte from the buffer.
>> // if byte contains framebit: continue
>> while((res = serial_read(fileDesc, buf, 1)) == 1 && !(buf[0] & 0x80)){
>> printf("FAILED TO DETECT FRAMEBIT, look at the next bit\n");
>> }
>>
>> /* read the remaining 23 bytes from the buffer*/
>> sofar = 1;
>> while (sofar < 24){
>> if ((res = serial_read(fileDesc, &(buf[sofar]), 24 - sofar)) <= 0){
>> ; // do nothing
>
> If you get here, an error happened -- you should probably abort.
Why should I abort? I ran into a problem where read() sometimes returns -1
after I overflowed the buffer.
>
>> The time I measure is mostly 10ms. Sometimes I get 0ms.
>> How come it takes exactly 10 ms and not 8 or 11?
>> Is this due to the frequency of the kernel?
>
> Yes. The system timer tick in current IA32 kernels is 100Hz or 10ms. That
> is the granularity with which tasks get scheduled.
But this is what I want to avoid...I need the data faster, without an extra
delay of 2ms. Is there no way to get the data faster?
Thanks,
Jens Schumacher
>> Why are you doing this serial_check_buffer thing? It's just wasting CPU
>> time spinning in a while() loop.
>
> Is there a difference between running in a while loop asking how many
> bytes are available in the buffer, and running in a while loop until a
> specific amount of bytes has been read? Like in the following part:
Yes.
I _assume_ that serial_read() is calling read(). read() will block and
suspend the process until data is ready. The ioctl() that asks how many
bytes are available returns immediately without sleeping, regardless of
whether there is data available. What are serial_check_buffer() and
serial_read()?
>>> The time I measure is mostly 10ms. Sometimes I get 0ms.
>>> How come it takes exactly 10 ms and not 8 or 11?
>>> Is this due to the frequency of the kernel?
>>
>> Yes. The system timer tick in current IA32 kernels is 100Hz or 10ms. That
>> is the granularity with which tasks get scheduled.
>
> But this is what I want to avoid...I need the data faster, without an extra
> delay of 2ms. Is there no way to get the data faster?
No.
If you really have to process the data exactly every 8ms, then you'll have to
use a real-time operating system (RTOS). A plain Linux kernel will not work
for you.
I still don't understand why you think you need to process the data at exact
8ms intervals. What are you doing with the processed data that is so time
critical?
> I _assume_ that serial_read() is calling read(). read() will block and
> suspend the process until data is ready. The ioctl() that asks how many
> bytes are available returns immediately without sleeping, regardless of
> whether there is data available. What are serial_check_buffer() and
> serial_read()?
Oh sorry, you're right. serial_read() is calling read().
Here is serial_read():
int serial_read (int fd, char *buff, int len)
{
    int c;
    c = read (fd, buff, len);
    // printf("serial read done. %d bytes read\n\n", c);
    return c;
}
And here is serial_check_buffer():
int serial_check_buffer(int fd)
{
    int bytes = 0;
    if (ioctl(fd, FIONREAD, &bytes) < 0)
        return -1;    /* ioctl failed */
    return bytes;
}
However, it seems that read() also returns directly. But maybe I'm
wrong...I have to check this.
> No.
>
> If you really have to process the data exactly every 8ms, then you'll have to
> use a real-time operating system (RTOS). A plain Linux kernel will not work
> for you.
>
> I still don't understand why you think you need to process the data at exact
> 8ms intervals. What are you doing with the processed data that is so time
> critical?
I'm analyzing eye-movements and in particular saccadic eye-movements. I need
to detect them online to be able to execute some experiments. Since small
saccadic eye-movements last only around 90ms I need to detect them really
fast and have to avoid delay where I can.
Jens Schumacher
> However, it seems that read() also returns directly. But maybe I'm
> wrong...I have to check this.
Unless you've set non-blocking mode on the file descriptor, it will not
return immediately unless there is data available in the receive queue.
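You can check and clear the flag explicitly, e.g. (sketch):

#include <fcntl.h>

/* Sketch: make sure descriptor fd is in blocking mode. */
int flags = fcntl(fd, F_GETFL, 0);
if (flags != -1 && (flags & O_NONBLOCK))
    fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);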
>> If you really have to process the data exactly every 8ms, then you'll have
>> to use a real-time operating system (RTOS). A plain Linux kernel will not
>> work for you.
>>
>> I still don't understand why you think you need to process the data at
>> exact 8ms intervals. What are you doing with the processed data that is so
>> time critical?
>
> I'm analyzing eye-movements and in particular saccadic eye-movements. I
> need to detect them online to be able to execute some experiments.
So you're controlling some sort of output device based on the input data
stream?
> Since small saccadic eye-movements last only around 90ms I need to detect
> them really fast and have to avoid delay where I can.
The data has already been captured by the serial driver. You're not going
to lose data unless you stop reading it. If you really can't afford a few
milliseconds of latency in processing the data, you're going to have to
switch to an RTOS (real time OS). Linux (or Windows) will not work.
> > You can even do this:
> >
> > 1) Wait 8 milliseconds (though it will probably really be ten)
> > 2) Read as many bytes as are ready without blocking.
> > 3) Process as many complete frames as you now have saving any leftover
> > bytes for the next pass.
>
> Then I will have another delay before I can process the data. I try to
> avoid any delay like this.
Then *always* block in 'read'.
DS
> I'm analyzing eye-movements and in particular saccadic eye-movements. I need
> to detect them online to be able to execute some experiments. Since small
> saccadic eye-movements last only around 90ms I need to detect them really
> fast and have to avoid delay where I can.
Your statement doesn't make any sense. How long they last doesn't say
anything about how quickly you need to detect them. So long as you get all
the data that is sent over the serial port, it doesn't seem to matter
exactly *when* you get it. If you get it 2 milliseconds later, what does
that matter? You still have the data.
Is there anything you are trying to *do* with the data that is
time-sensitive? It's hard to imagine anything you might do that needs to be
done that quickly.
You can solve your problem so many ways. You can even sleep for 20
milliseconds, then read *all* the data in the serial buffer, process it,
then loop.
DS
Let me add a few comments to what Grant has indicated here.
The only time one would want to call a function like
serial_check_buffer() is if data is arriving asynchronously at
random intervals and the program thread has something to do
while waiting for data to arrive. Moreover, if that is the case
then the only sane way to write a serial_check_buffer() function
is to include a minimal sleep, which will cause 10ms of that
timing granularity to be given to other processes!
Now, that _sounds_ exactly like what Jens wants to avoid. But
in fact, I don't think he does...
>>>> The time I measure is mostly 10ms. Sometimes I get 0ms.
Note that the reason for the 0ms on some occasions and 10ms on
most is the inability to sync a 10ms interrupt tick with an 8.333
ms data interval. What happens is every so many times there are
*two* sets of data sent within that 10ms interval. For example,
data is sent at 1 ms into it and at 9 ms into it, and then you
read at 10ms into it and get all of the data for two sets.
But the timer can't measure in microseconds, and the second data
_read_ happens literally microseconds (all within the same
time slice) after the first one. Hence the timer says there was
0ms between them, because its smallest granularity is 10ms.
But that just points out that you don't really need to read the
data every 8 ms. You could read the data every 16 ms or every
32 ms and be just fine. You just get twice or four times as
much data with each read that way.
In fact, I suspect that with 24 bytes per data packet, you could
read the data every 4000/24*.008 seconds (about every 1.3
seconds) because the kernel is going to buffer 4000 bytes before
you actually will lose any data.
>>>> How come it takes exactly 10 ms and not 8 or 11?
>>>> Is this due to the frequency of the kernel?
>>>
>>> Yes. The system timer tick in current IA32 kernels is 100Hz or 10ms. That
>>> is the granularity with which tasks get scheduled.
>>
>> But this is what I want to avoid...I need the data faster, without an extra
>> delay of 2ms. Is there no way to get the data faster?
>
>No.
>
>If you really have to process the data exactly every 8ms, then you'll have to
>use a real-time operating system (RTOS). A plain Linux kernel will not work
>for you.
>
>I still don't understand why you think you need to process the data at exact
>8ms intervals. What are you doing with the processed data that is so time
>critical?
My understanding of what Jens said earlier (and he can correct
me if I'm wrong here, because I might well be) is that he is
graphing the data for display, and he believes that if he
doesn't grab the data as it becomes available there will be
buffer overflows and he will lose the data.
Of course, that is not what will happen. In fact the hardware
(the serial port chip itself) is going to buffer as much as 16
bytes. Then the kernel (the device driver) is going to buffer
4000 bytes. And all of that 4000 bytes (or any portion of it)
can be read with a single read(2) call. As I pointed out
yesterday with the bit rate chart, any rate faster than 38.4Kbps
is going to be fast enough to grab all of the data being sent.
With a 4000 byte buffer in the serial driver, you only need to grab
that data more often than every 1.3 seconds to get it all.
However, if this data is being *used* for something that is time
critical, then yes there is a problem. For example, if you need
to read the data, and send feedback to the device *before* it
sends another packet of data... that is a serious real time
constraint that a standard Linux kernel is not going to provide.
--
Floyd L. Davidson <http://web.newsguy.com/floyd_davidson>
Ukpeagvik (Barrow, Alaska) fl...@barrow.com
> In article <BBC59AAC.1D12%jens.sc...@gmx.net>, Jens Schumacher wrote:
>
> Unless you've set non-blocking mode on the file descriptor, it will not
> return immediately unless there is data available in the receive queue.
Ok, I'll check this, but it should be fine.
>> I'm analyzing eye-movements and in particular saccadic eye-movements. I
>> need to detect them online to be able to execute some experiments.
>
> So you're controlling some sort of output device based on the input data
> stream?
Yes, that's right. I run my experiments on the screen and display images
there. Depending on the eye-movements I change the images.
>
>> Since small saccadic eye-movements last only around 90ms I need to detect
>> them really fast and have to avoid delay where I can.
>
> The data has already been captured by the serial driver. You're not going
> to lose data unless you stop reading it. If you really can't afford a few
> milliseconds of latency in processing the data, you're going to have to
> switch to an RTOS (real time OS). Linux (or Windows) will not work.
Yes, I think you're right. I have to calculate all the latencies I already
have and see if these few milliseconds are important.
Just a short question about soft real-time: can't I solve the problem with
the posix functions? Why is it called soft real-time if it can't break the
10 ms limitation?
Greetings, Jens
> Yes, that's right. I run my experiments on the screen and display images
> there. Depending on the eye movements I change the images.
You're worried about a couple milliseconds when your output is
images on the screen? You're joking... You realize that each
pixel is only updated once every 14ms or so?
>>> Since small saccadic eye-movements last only around 90ms I
>>> need to detect them really fast and have to avoid delay where
>>> I can.
>>
>> The data has already been captured by the serial driver.
>> You're not going to lose data unless you stop reading it. If
>> you really can't afford a few milliseconds of latency in
>> processing the data, you're going to have to switch to an RTOS
>> (real time OS). Linux (or Windows) will not work.
>
> Yes, I think you're right. I have to calculate all the latencies
> I already have and see if these few milliseconds are important.
>
> Just a short question about soft real-time: can't I solve the problem with
> these POSIX functions?
Dunno. Read the documentation. I doubt it.
> Why is it called soft real-time if it can't break the 10 ms
> limitation?
Because it's soft real time? What do you think it should be
called?
--
Grant Edwards grante Yow! My LESLIE GORE record
at is BROKEN...
visi.com
> Let me add a few comments to what Grant has indicated here.
>
> The only time one would want to call a function like
> serial_check_buffer() is if data is arriving asynchronously at
> random intervals and the program thread has something to do
> while waiting for data to arrive. Moreover, if that is the case
> then the only sane way to write a serial_check_buffer() function
> is to include a minimal sleep, which will cause 10ms of that
> timing granularity to be given to other processes!
>
> Now, that _sounds_ exactly like what Jens wants to avoid. But
> in fact, I don't think he does...
Yes, it sounds like you're right about this. The problem is that I started to
write the driver without knowing that there are these limitations in the kernel.
I never met these problems before, because I never did anything with hardware
that was meant to run at specific time intervals of a few ms.
>>>>> The time I measure is mostly 10ms. Sometimes I get 0ms.
>
> Note that the reason for the 0ms on some occasions and 10ms on
> most is the inability to sync a 10ms interrupt tick with an 8.333
> ms data interval. What happens is every so many times there are
> *two* sets of data sent within that 10ms interval. For example,
> data is sent at 1 ms into it and at 9 ms into it, and then you
> read at 10ms into it and get all of the data for two sets.
>
> But the timer can't measure in microseconds, and the second data
> _read_ happens literally microseconds (all within the same
> time slice) after the first one. Hence the timer says there was
> 0ms between them, because its smallest granularity is 10ms.
>
Yes and no. Yes, it seems that I read two frames in one interval, but you can
measure microseconds with gettimeofday().
> But that just points out that you don't really need to read the
> data every 8 ms. You could read the data every 16 ms or every
> 32 ms and be just fine. You just get twice or four times as
> much data with each read that way.
No, I can't; the reason comes later...
> My understanding of what Jens said earlier (and he can correct
> me if I'm wrong here, because I might well be) is that he is
> graphing the data for display, and he believes that if he
> doesn't grab the data as it becomes available there will be
> buffer overflows and he will lose the data.
>
...basically right: I want to display something on the monitor.
The reason why I have to read the data when it arrives is pretty simple.
I wrote earlier that I have to detect saccades. I'm doing this with a
5-point differentiator. Because I'm doing this online, this means that I take
the current point and use the 4 data frames I got before. You see, there is
already a delay of more than 2 frames for the detection.
When I read the data every 32 milliseconds and the subject starts to
make a saccadic eye movement after 8 ms, the chance to detect the
saccade and to react in time gets pretty small. This is why it's important
to get every frame of data as soon as possible.
> However, if this data is being *used* for something that is time
> critical, then yes there is a problem. For example, if you need
> to read the data, and send feedback to the device *before* it
> sends another packet of data... that is a serious real time
> constraint that a standard Linux kernel is not going to provide.
I don't need to send feedback, but as I described, I need the data pretty
fast after the eye movement occurred.
Thank you very much,
Jens Schumacher
> You're worried about a couple milliseconds when your output is
> images on the screen? You're joking... You realize that each
> pixel is only updated once every 14ms or so?
>
Of course I realize that it takes the screen at least 14ms to update a
picture. But that is one more reason to care about the few milliseconds I
can save somewhere. Since I need 5 frames to analyze the data, the 2 ms
becomes a 10 millisecond delay.
>> Just a short question about soft real-time: can't I solve the problem with
>> these POSIX functions?
>
> Dunno. Read the documentation. I doubt it.
Ok I will.
Thanks anyway. I think I have a clear view now of what I can do in Linux
without using RT modules and how to poll the serial port for data. I have to
read about soft real-time; maybe I can solve some problems with the
scheduler functions.
Thank you very much for your help,
Jens
> Thanks anyway. I think I have a clear view now of what I can do in Linux
> without using RT modules and how to poll the serial port for data. I have to
> read about soft real-time; maybe I can solve some problems with the
> scheduler functions.
You have no problem. You can run every 10 milliseconds and the screen
only updates every 14 milliseconds. What is the problem supposed to be?
DS
> Am 29/10/03 22:08 Uhr schrieb "Grant Edwards" unter <gra...@visi.com> in
> 3fa080b9$0$41286$a186...@newsreader.visi.com:
>
>> You're worried about a couple milliseconds when your output is
>> images on the screen? You're joking... You realize that each
>> pixel is only updated once every 14ms or so?
>>
> Of course I realized that it takes the screen at least 14ms to update a
> picture. But this is one reason more to care about the few milliseconds I
> can save somewhere. Since I need 5 frames to analyze the data the 2 ms
> become 10 milliseconds delay.
But no! You'll get the 1st frame 2ms (more or less) after it appeared,
and also the 5th and the 500th. You mustn't add up the delays.
The main timeslice you have to look at is the frame rate of your video
device. Say it updates at 100Hz; that means there are slices of
10ms. All you can say is: from the time you have processed a set of
data until the change gets visible there is a delay of 0 - 10ms. At
least that's all you can say in a non-realtime environment.
Or do I miss something?
>>> Just a short question about soft real-time: can't I solve the
>>> problem with these POSIX functions?
I don't know these functions. Can you give me a hint?
Daniel.
>> You realize that each
>> pixel is only updated once every 14ms or so?
>>
>Of course I realized that it takes the screen at least 14ms to update a
>picture.
You can buy a good monitor and run it at 120Hz vertical rate :-)
> But this is one reason more to care about the few milliseconds I
>can save somewhere. ^^^^^^^^
You don't have to..
> Since I need 5 frames to analyze the data
True.
BUT:
1) There is no timing frame involved in analyzing:
You can do the same analysis with data replayed from the hard disk
or with real data.
2) After you have synchronized and got at least 5 frames, every screen
update will be instantaneous relative to your calculation.
Just use a standard serial port with interrupts enabled. You have an
(up to) 16-byte buffer in the 16550 and nearly 4K in the standard
kernel driver. If that should not be sufficient, go for another
PCI serial board (made in Norderstedt, less than 100 km from your town :-)
with a larger on-UART FIFO. And/or change the kernel module for
a larger buffer...
And just read away those bytes as they arrive, process them, and
do not even think of any "timer"; your screen will be in sync with
the data rate with a barely noticeable delay of 5*8 msec (less than
50 msec!)
Good luck!
And I'll add a few more.
I believe I have the solution, too, and it won't require a bunch of work
to implement. I ran into these same types of timing issues when I was building
my digital synthesizer for my PhD.
-> The only time one would want to call a function like
-> serial_check_buffer() is if data is arriving asynchronously at
-> random intervals and the program thread has something to do
-> while waiting for data to arrive.
Correct.
-> Moreover, if that is the case
-> then the only sane way to write a serial_check_buffer() function
-> is to include a minimal sleep, which will cause 10ms of that
-> timing granularity to be given to other processes!
Correct again. sleep has a granularity of 10ms. But my question now is:
do read and/or select have the same granularity? What about usleep or
nanosleep? If so, then I can see why Jens is upset.
I'm interested now, so I guess I'll spend a couple of minutes setting up
some experiments. I'll be back. OK I'm back with the first test. I have this
loop:
-------
    for (i = 0; i < 10; i++) {
        delay.tv_sec = 0;
        delay.tv_usec = 100;
        select(0, NULL, NULL, NULL, &delay);
        TIMER_GET(ticks[i]);
    }
------------
TIMER_GET is a macro that simply marks the time with gettimeofday. I have
another routine that converts into microseconds from a start time. Here are
the results:
0: 2533
1: 12367
2: 22483
3: 32533
4: 42552
5: 52388
6: 62441
7: 72364
8: 82694
9: 92360
So there is no doubt that even though we told select to delay 100 uS,
it took 10 ms to get back to us. Bummer.
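For anyone who wants to reproduce this, here is a self-contained
version of the test, with TIMER_GET expanded inline (the exact numbers
will of course vary from machine to machine):
-------
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct timeval start, tv, delay;
    long ticks[10];
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < 10; i++) {
        delay.tv_sec = 0;
        delay.tv_usec = 100;              /* ask for 100 uS */
        select(0, NULL, NULL, NULL, &delay);
        /* microseconds since start; saved now, printed after the loop */
        gettimeofday(&tv, NULL);
        ticks[i] = (tv.tv_sec - start.tv_sec) * 1000000L
                 + (tv.tv_usec - start.tv_usec);
    }
    for (i = 0; i < 10; i++)
        printf("%d: %ld\n", i, ticks[i]);
    return 0;
}
------------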
Here's a quick look at the sleeps [usleep, and nanosleep].
usleep is bad:
0: 16078
1: 53925
2: 66021
3: 85939
4: 105932
5: 126035
6: 145946
7: 167601
8: 185940
9: 208655
I won't bother with nanosleep because the manual page indicates that it is
subject to the 10ms timer.
There's no way to test a read using two processes because once one goes to
sleep, then the other will run for 10 ms.
So here's the solution: Drum roll please!
You have to run the process in a real time scheduling mode. It requires
very few things. Here is my sample code:
------------------------------
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param pri;

    pri.sched_priority = 10;
    if (sched_setscheduler(0, SCHED_FIFO, &pri) < 0) {
        perror("sched_setscheduler");
        return 1;
    }
    /* Proceed as usual... */
    return 0;
}
------------------------------
This will put the process into the real-time FIFO scheduling queue. The upshot
is that whenever a real-time process is runnable, it preempts any other
running process. This fixes the problem. Note that the process must now
be run as root, because only the superuser can put a process into a real-time
scheduling mode.
Here's the updated usleep run:
0: 112
1: 214
2: 316
3: 417
4: 519
5: 620
6: 722
7: 824
8: 925
9: 1027
---------------------------------------
Just about as close to 100 uS as you can get. Here's the updated select:
0: 7195
1: 17180
2: 27177
3: 37196
4: 47195
5: 57192
6: 67190
7: 77193
8: 87182
9: 97182
----------------------------
Hmmm. No change. Interesting.
Still, it presents a solution. But here is the important part: if your real-
time scheduled process doesn't sleep when it's waiting for something, nothing
else on the system runs! It is critical to understand that, because you can
lock up your system tight as a drum running one of these.
So now let's finish the discussion with this new information...
->
-> Now, that _sounds_ exactly like what Jens wants to avoid. But
-> in fact, I don't think he does...
-
-Yes, it sounds like you're right about this. The problem is that I started to
-write the driver without knowing that there are these limitations in the kernel.
Not exactly limitations in the kernel, but limitations that the kernel imposes
upon ordinary processes. Remember that Linux isn't a real time kernel, so
a bit of latency usually isn't an issue. But it is here. Note that there is
a simple solution for the problem you are trying to solve.
-I never met these problems before, because I never did anything with hardware
-that was meant to run at specific time intervals of a few ms.
Right. So a new paradigm had to be introduced.
-
->>>>> The time I measure is mostly 10ms. Sometimes I get 0ms.
->
-> Note that the reason for the 0ms on some occasions and 10ms on
-> most is the inability to sync a 10ms interrupt tick with an 8.333
-> ms data interval. What happens is every so many times there are
-> *two* sets of data sent within that 10ms interval. For example,
-> data is sent at 1 ms into it and at 9 ms into it, and then you
-> read at 10ms into it and get all of the data for two sets.
->
-> But the timer can't measure in microseconds, and the second data
-> _read_ happens literally microseconds (all within the same
-> time slice) after the first one. Hence the timer says there was
-> 0ms between them, because its smallest granularity is 10ms.
Not exactly correct. You sleep for 10ms if no data is there, and you
don't sleep at all on the read if data is there.
Note from the above run you don't want to use select. You want to keep track
of the last time you did a read, sleep until 8333 uS after the last read,
then read again. The difference with the real-time scheduler is that as soon
as the 8333 uS elapse, the kernel will immediately preempt whatever is running
and restart your process.
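Something like this, for instance (untested sketch; the 8333 uS frame
interval for the 120Hz device is hard-coded):
------------------------------
#include <sys/time.h>
#include <unistd.h>

#define FRAME_US 8333L          /* 120 Hz frame interval */

/* Sleep until FRAME_US after *last, then reset *last to now.
 * Under SCHED_FIFO the wakeup is prompt; under the normal
 * scheduler it is still subject to the 10 ms tick. */
void wait_for_next_frame(struct timeval *last)
{
    struct timeval now;
    long elapsed;

    gettimeofday(&now, NULL);
    elapsed = (now.tv_sec - last->tv_sec) * 1000000L
            + (now.tv_usec - last->tv_usec);
    if (elapsed < FRAME_US)
        usleep(FRAME_US - elapsed);
    gettimeofday(last, NULL);
}
------------------------------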
->
-Yes and no. Yes, it seems that I read two frames in one interval, but you can
-measure microseconds with gettimeofday();
Right. That's how I tested this stuff.
-
-> But that just points out that you don't really need to read the
-> data every 8 ms. You could read the data every 16 ms or every
-> 32 ms and be just fine. You just get twice or four times as
-> much data with each read that way.
-
-No, I can't; the reason comes later...
I do understand. You are correct.
-> My understanding of what Jens said earlier (and he can correct
-> me if I'm wrong here, because I might well be) is that he is
-> graphing the data for display, and he believes that if he
-> doesn't grab the data as it becomes available there will be
-> buffer overflows and he will lose the data.
->
-...basically right I want to display something on the monitor.
-The reason why I have to read the data when it arrives is pretty easy.
-I wrote earlier that I have to detect saccades. I'm doing this with a
-5 point differentiator. Because I'm doing this online this means that I take
-the current point and use the 4 data frames I got before. You see here is
-already a delay of more than 2 frames for the detection.
Right. So as you pointed out from the beginning, it's a soft real time process.
-When I'm reading the data every 32 milliseconds, and the subject starts to
-make a saccadic eye-movement after 8 ms, the possibility to detect the
-saccade and to react gets pretty small. This is why it's important to get
-every frame of data as soon as possible.
Correct. So make the process run on the real-time scheduler, make sure that
it usleeps when it's waiting, and it'll wake up right on time.
-> However, if this data is being *used* for something that is time
-> critical, then yes there is a problem. For example, if you need
-> to read the data, and send feedback to the device *before* it
-> sends another packet of data... that is a serious real time
-> constraint that a standard Linux kernel is not going to provide.
-
-I don't need to send feedback, but as I described, I need the data pretty
-fast after the eye-movement occurred.
I hope this helps. I used the real time scheduling queue to get precise timing
for my PhD results. I'm sure that it'll help you in your application.
BAJ
That involves three issues. One is how fast you can schedule
the timer to be called. For example, anything which causes the
process to give up its time slice between calls to the timer
makes it impossible to schedule the timer sooner than the HZ
tick rate. Hence, 10 ms is the smallest time you'll see when
that happens. Your code will necessarily do exactly that every
time read() is called if it blocks waiting for input data.
The second issue, which we can ignore in this case, is the
resolution of the timer. The gettimeofday() function has 1
microsecond resolution.
The third issue is the granularity of the timer. I'm not
positive about gettimeofday(). What I read indicates that it
might not always be the same, and when I've set up test code, it
does in fact vary from one iteration to another. However, in
this case it doesn't matter because of the way you are using it.
Your implementation of a timer has 1 *millisecond* granularity,
not 1 microsecond.
// Start timer
gettimeofday( &tv, NULL );
t1 = tv.tv_sec*1000 + tv.tv_usec/1000;
^^^^^^
...
// get the time the function needed to run.
gettimeofday( &tv, NULL );
t2 = tv.tv_sec*1000 + tv.tv_usec/1000; /* ms */
^^^^^
t = t2 - t1;
printf("delay: %d ms\n\n", t );
You've got 1 millisecond granularity.
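The fix is to keep the arithmetic in microseconds and subtract the two
timevals directly, along these lines (fragment only; tv1 and tv2 are
struct timeval, t is a long):
// Start timer
gettimeofday( &tv1, NULL );
...
// get the time the function needed to run, in microseconds
gettimeofday( &tv2, NULL );
t = (tv2.tv_sec - tv1.tv_sec) * 1000000L
  + (tv2.tv_usec - tv1.tv_usec);
printf("delay: %ld us\n\n", t );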
I just set up a loop, reading from stdin, and timed calls to
read(0, buf, 24), and got values ranging as low as 7
microseconds up to a few dozen depending on what else the system
was doing at the moment.
Using your 1ms granularity they would all have displayed as 0ms.
That is the time it takes read() to fetch data already buffered.
Clearly, at 38,400 bps it takes significantly longer than 7
microseconds to transmit and receive 24 bytes, hence a loop that
iterates read() until 24 bytes are received may take anything
from 7 microseconds (maybe less) on up to the time it takes 24
bytes to be sent at the bit rate of the serial port, which is
6.25 ms at 38.4Kbps.
>> But that just points out that you don't really need to read the
>> data every 8 ms. You could read the data every 16 ms or every
>> 32 ms and be just fine. You just get twice or four times as
>> much data with each read that way.
>
>No, I can't; the reason comes later...
>
>> My understanding of what Jens said earlier (and he can correct
>> me if I'm wrong here, because I might well be) is that he is
>> graphing the data for display, and he believes that if he
>> doesn't grab the data as it becomes available there will be
>> buffer overflows and he will lose the data.
>>
>...basically right I want to display something on the monitor.
>The reason why I have to read the data when it arrives is pretty easy.
>I wrote earlier that I have to detect saccades. I'm doing this with a
>5 point differentiator. Because I'm doing this online this means that I take
>the current point and use the 4 data frames I got before. You see here is
>already a delay of more than 2 frames for the detection.
You have data that is produced 32 ms in the past, at the
earliest (4 frames), but is arriving 1 packet every 8.333 ms.
You are displaying this on a monitor that displays on the screen
at roughly a 72Hz rate (the vertical refresh rate of your
monitor) or about every 14ms.
The difference between those two rates *is* a problem, but not a
problem in data retrieval so much as in data display!
>When I'm reading the data every 32 milliseconds, and the subject starts to
>make a saccadic eye-movement after 8 ms, the possibility to detect the
>saccade and to react gets pretty small. This is why it's important to get
>every frame of data as soon as possible.
That is true if and *only if* you do something with the data
that will affect future data. If there is no feedback
mechanism, you merely need to record the data.
And if all you need to do is record changes (frequency,
magnitude, length, latency, whatever), you can actually make
those determinations not just milliseconds later, but months
later. That is because you've already quantized the data at
intervals. The requirement is not to get the data quickly, but
to guarantee you get *all* of the data.
>> However, if this data is being *used* for something that is time
>> critical, then yes there is a problem. For example, if you need
>> to read the data, and send feedback to the device *before* it
>> sends another packet of data... that is a serious real time
>> constraint that a standard Linux kernel is not going to provide.
>
>I don't need to send feedback, but as I described, I need the data pretty
>fast after the eye-movement occurred.
Why? What do you *do* with the data that makes it time
critical? So far you've only mentioned displaying the data,
which does *not* make obtaining the data time critical. You
could save it on a disk and display it next year.
If you want to display it, and that is all, then your problem is
not one of latency for data retrieval. Rather, your problem is
matching the framing rate of your display to the data intervals
without losing data changes. That is a data *output* problem!
It is not a trivial problem either! Consider that at 120Hz you
are going to gather 120 data samples in 1 second, one every
8.333 ms. However, your monitor has a vertical refresh rate of
something between 60 Hz and 90 Hz, so let's pick 72 Hz as an
example and look at it. In 1 second your monitor can only
display 72 different samples, but you have 120 of them!
Higher latency and jitter debouncing are probably what you'll
need. The higher the latency (the delay between when the
movement occurs physically and when it is displayed on the
screen), the smoother you can massage the data. For example, if
you are willing to buffer one whole second worth of data, you
can use a FIFO buffer and an optimizer that works on the entire
data set to reduce the number of actual samples from 120 to 72
between the time data arrives and the time data is displayed.
That can be done by reducing large blocks of same size data to
smaller blocks of same size data (e.g., if no movement occurs
for 5 packets, display only 3 of them).
Obviously that particular example is applicable only if you are
not interested in the time between movements, but are interested
in the magnitude or other properties of the movements. You'll
have to pick whatever form of optimization retains the
information you are interested in.
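As a purely illustrative sketch of the simplest possible reduction (a
fractional step through one second of buffered samples; a real reducer
would merge runs of unchanged data as described above):
------------------------------
/* Reduce nsrc buffered samples to ndst display samples by
 * stepping through the source at a fractional rate (e.g.
 * nsrc = 120, ndst = 72).  Skipped samples are simply dropped. */
void decimate(const double *src, int nsrc, double *dst, int ndst)
{
    int i;

    for (i = 0; i < ndst; i++)
        dst[i] = src[(long)i * nsrc / ndst];
}
------------------------------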
However, I would also point out that even that discussion is a
bit ridiculous! Your monitor might be able to display 72 data
samples per second, but the human eye cannot distinguish that
many. I'm not sure just what rate can be distinguished, but it
is certainly less than the 24 fps used for motion pictures.
Clearly I think what you'll really need is not a direct display
of the eye movement, but a display of some form of analysis of
that eye movement. The actual display will almost certainly
need to be highly amplified to be apparent to anyone viewing
the monitor.
Yes. Any sleep function will give up the process's current time
slice, and that process then will not be rescheduled until at
least the minimum HZ time interval has passed.
The effect is that any sleep will have a granularity of 10ms.
Yes, but your results (even though they are accurate) are probably not
valid! :-)
It depends on how you coded TIMER_GET. If you saved the values in
an array, and analyzed them outside of the for loop shown above,
they are valid. Otherwise...
>Here's a quick look at the sleeps [usleep, and nanosleep].
[huge snip of stuff that is interesting and accurate]
>-> But that just points out that you don't really need to read the
>-> data every 8 ms. You could read the data every 16 ms or every
>-> 32 ms and be just fine. You just get twice or four times as
>-> much data with each read that way.
>-
>-No, I can't; the reason comes later...
>
>I do understand. You are correct.
I don't think so... :-)
Whether the data is collected in real time, or not, just isn't
going to change what happens later. Whether that is 32ms later
or 32 days later makes no difference if the data is being fed to
a display device that updates the screen every 14ms instead of
less than every 8 ms. And, regardless of whether it did or not,
no human can actually *see* data on the screen being changed
that fast!
Note that his input timing is *not* what determines the data
quantization intervals. That is predetermined by the device
connected to the serial port. As a result, how accurate the
timing of data collection is makes absolutely no difference in
what the data is. In fact, the data is stored in a 4K-byte buffer
by the serial port device driver, and it would clearly be
possible to grab chunks out of that buffer that are at least up
to 1 whole second apart, and it would not affect the data
integrity in the slightest.
Displaying the data is a whole different bag of worms though,
and nothing relating to data input is going to affect that.
>> Moreover, if that is the case then the only sane way to write a
>> serial_check_buffer() function is to include a minimal sleep, which will
>> cause 10ms of that timing granularity to be given to other processes!
>
> Correct again. sleep has a granularity of 10ms.
If you're talking about the libc call sleep(), it has a granularity of 1sec.
> But my question now is: do read and/or select have the same granularity?
No. The task can become runnable after a read/select call at any point.
That's largely a moot point, since the task won't actually _run_ until the
next time the scheduler runs, and that's at the next 10ms interrupt.
OK, there _is_ (or at least was, the last time I looked) a way for a driver
to wake up somebody blocked on a read and then make sure that the
scheduler runs as soon as the driver tasklet completes. That way the 10ms
scheduler granularity can be bypassed. That's fine as long as only one
driver does it once in a while. If all the drivers tried to do it, it could
cause a lot of overhead.
> What about usleep or nanosleep?
Same answer. It would be _possible_ to implement them in such a way that a
task becomes runnable with microsecond granularity, but under "normal"
conditions, the state of a task doesn't matter until the next 10ms interrupt
causes the scheduler to run.
> If so then I can see why Jens is upset.
He seems to be "upset" because Linux is not an RTOS. Nobody ever claimed it
was. If you beat on it enough, you can get it closer than the usual
"default" state of things.
> -> But the timer can't measure in microseconds, and the second data
> -> _read_ happens literally microseconds (all within the same
> -> time slice) after the first one. Hence the timer says there was
> -> 0ms between them, because its smallest granularity is 10ms.
>
> Not exactly correct. You sleep for 10ms if no data is there, and you
> don't sleep at all on the read if data is there.
>
> Note from the above run you don't want to use select. You want to keep track
> of the last time you did a read, sleep until 8333 uS after the last read,
> then read again. The difference with the real-time schedule is that as soon
> as the 8333 uS elapses, the kernel will immediately preempt whatever is running
> and restart your process.
If you want to try that, you'd better do a couple other things as well:
1) Disable the FIFO on your UART so you get an interrupt for every byte.
It'll increase overhead, but it'll play hell with your timings
otherwise.
2) Set the low-latency flag on the device. By default, the serial driver
buffers up rx bytes and only pushes them up to the tty line discipline
layer once every (you guessed it!) 10ms. Setting the low-latency flag
will send the bytes up on every interrupt.
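Setting that low-latency flag, for reference, goes through the
TIOCSSERIAL ioctl (Linux-specific; ASYNC_LOW_LATENCY comes from
linux/serial.h; this is an untested sketch):
------------------------------
#include <sys/ioctl.h>
#include <linux/serial.h>

/* Ask the serial driver to push rx bytes up to the line
 * discipline on every interrupt instead of on the 10ms tick. */
int set_low_latency(int fd)
{
    struct serial_struct ss;

    if (ioctl(fd, TIOCGSERIAL, &ss) < 0)
        return -1;
    ss.flags |= ASYNC_LOW_LATENCY;
    return ioctl(fd, TIOCSSERIAL, &ss);
}
------------------------------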
> Correct. So make the process run on the real-time scheduler, make sure that
> it usleeps when it's waiting, and it'll wake up right on time.
But if you don't set up the serial driver correctly, there probably won't be
any data there when you wake up: it'll be sitting in the UART Rx FIFO or in
the serial driver's receive buffer.
> I hope this helps. I used the real time scheduling queue to get precise
> timing for my PhD results. I'm sure that it'll help you in your
> application.
--
>> Of course I realized that it takes the screen at least 14ms to update a
>> picture. But this is one reason more to care about the few milliseconds I
>> can save somewhere. Since I need 5 frames to analyze the data the 2 ms
>> become 10 milliseconds delay.
>
> But no! You'll get the 1st frame 2ms (more or less) after it appeared,
> and also the 5th and the 500th. You mustn't add up the delays.
Yes, of course you're right. I don't know what I had in mind when I wrote that.
> The main timeslice you have to look at is the framerate of your video
> device. Say it updates with 100hz, that means there are slices of
> 10ms. All you can say is: from the time you have processed a set of
> data until the change gets visible there is a delay of 0 - 10ms. At
> least that's all you can say in a non-realtime environment.
>
> Or do I miss something?
>
>>>> Just a short question about soft real-time: can't I solve the
>>>> problem with these POSIX functions?
>
> I don't know these functions. Can you give me a hint?
Found a thread about this here:
http://groups.google.com/groups?q=soft-real-time+linux&hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=4lf54tkek257t6sefch6v4jq494l89ifa2%404ax.com&rnum=6
Jens
>> Since I need 5 frames to analyze the data
>
> True.
> BUT:
>
> 1) There is no timing frame involved in analyzing:
> You can do the same analysis with data replayed from the harddisk
> Or with real data.
>
> 2) After you have synchronized and got at least 5 frames, every screen
> update will be instantaneous relative to your calculation.
>
>
> Just use a standard serial port with interrupts enabled. You have an
> (up to) 16-byte buffer in the 16550 and nearly 4K in the standard
> kernel driver. If that should not be sufficient, go for another
> PCI serial board (made in Norderstedt, less than 100 km from your town :-)
> with a larger on-UART FIFO. And/or change the kernel module for
> a larger buffer...
From what I've seen so far, I don't get problems with the buffer.
So no need to change a kernel module or to visit Norderstedt ;)
> And just read away those bytes as they arrive, process them, and
> do not even think of any "timer"; your screen will be in sync with
> the data rate with a barely noticeable delay of 5*8 msec (less than
> 50 msec!)
I think I will go this way. When I've finished making the communication
run in a thread, I will measure the overall delay I get and see if the
system is fast enough. But it should be. I just have some problems at the
moment with the blocking read. I haven't set the non-blocking flag, but I
still get the EAGAIN error, which suggests I'm running in non-blocking mode.
> Good luck!
Thank you.
Jens
No. I meant the granularity of being rescheduled after blocking in the
kernel. usleep, nanosleep, and select have fine enough granularity to
expose this.
-
-> But my question now is: do read and/or select have the same granularity?
-
-No. The task can become runnable after a read/select call at any point.
-That's largely a moot point, since the task won't actually _run_ until the
-next time the scheduler runs, and that's at the next 10ms interrupt.
As you point out it is moot.
-
-OK, there _is_ (or at least was, the last time I looked) a way for a driver
-to wake up somebody blocked on a read and then make sure that the
-scheduler runs as soon as the driver tasklet completes. That way the 10ms
-scheduler granularity can be bypassed. That's fine as long as only one
-driver does it once in a while. If all the drivers tried to do it, it could
-cause a lot of overhead.
And of course the other method I pointed out.
-
-> What about usleep or nanosleep?
-
-Same answer. It would be _possible_ to implement them in such a way that a
-task becomes runnable with microsecond granularity, but under "normal"
-conditions, the state of a task doesn't matter until the next 10ms interrupt
-causes the scheduler to run.
Correct. So we need an abnormal circumstance.
-
-> If so then I can see why Jens is upset.
-
-He seems to be "upset" because Linux is not an RTOS. Nobody ever claimed it
-was. If you beat on it enough, you can get it closer than the usual
-"default" state of things.
Agreed. There's no apparent reason (note the word apparent here) why the
scheduler is only run at 10ms intervals. He's frustrated because it doesn't
meet his expectation: he asks to sleep for 800 uS and doesn't
come back online for more than 10 times that requested timeframe.
-
-> -> But the timer can't measure in microseconds, and the second data
-> -> _read_ happens literally microseconds (all within the same
-> -> time slice) after the first one. Hence the timer says there was
-> -> 0ms between them, because its smallest granularity is 10ms.
->
-> Not exactly correct. You sleep for 10ms if no data is there, and you
-> don't sleep at all on the read if data is there.
->
-> Note from the above run you don't want to use select. You want to keep track
-> of the last time you did a read, sleep until 8333 uS after the last read,
-> then read again. The difference with the real-time schedule is that as soon
-> as the 8333 uS elapses, the kernel will immediately preempt whatever is running
-> and restart your process.
-
-If you want to try that, you'd better do a couple other things as well:
-
- 1) Disable the FIFO on your UART so you get an interrupt for every byte.
- It'll increase overhead, but it'll play hell with your timings
- otherwise.
-
- 2) Set the low-latency flag on the device. By default, the serial driver
- buffers up rx bytes and only pushes them up to the tty line discipline
- layer once every (you guessed it!) 10ms. Setting the low-latency flag
- will send the bytes up on every interrupt.
Can you set the FIFO trigger level? I think the driver generally sets it up
to interrupt when it fills with 14 bytes.
Also, is there an ioctl that forces a FIFO flush even though an interrupt
hasn't occurred yet? I think that would be better than disabling the FIFO
and generating an interrupt for each and every received byte.
-
-> Correct. So make the process run on the real time scheduler, Make sure that
-> it usleeps when it's waiting, and it'll wake up right on time.
-
-But if you don't set up the serial driver correctly, there probably won't be
-any data there when you wake up: it'll be sitting in the UART Rx FIFO or in
-the serial driver's receive buffer.
Cool beans. I never looked at serial.c that closely.
BAJ
Unless you put it on a real time scheduling queue.
-
-The effect is that any sleep will have a granularity of 10ms.
Under normal circumstances.
-
->I'm interested now, so I guess I'll spend a couple of minutes setting up
->some experiments. I'll be back. OK I'm back with the first test. I have this
->loop:
->
->-------
-> for(i=0;i<10;i++) {
-> delay.tv_sec = 0;
-> delay.tv_usec = 100;
-> select(0,NULL,NULL,NULL,&delay);
-> TIMER_GET(ticks[i]);
-> }
->------------
->TIMER_GET is a macro that simply marks the time with gettimeofday. I have
->another routine that converts into microseconds from a start time. Here are
->the results:
->
->0: 2533
->1: 12367
->2: 22483
->3: 32533
->4: 42552
->5: 52388
->6: 62441
->7: 72364
->8: 82694
->9: 92360
->
->So there is no doubt that even though we told select to delay 100 uS,
->it took 10 ms for it to get back to us. Bummer.
-
-Yes, but your results (even though they are accurate) are probably not
-valid! :-)
-
-It depends on how you coded TIMER_GET. If you saved the values in
-an array, and analyzed them outside of the for loop shown above,
-they are valid. Otherwise...
The former. Collected in the loop, analyzed after the loop completed.
-
->Here's a quick look at the sleeps [usleep, and nanosleep].
-
- [huge snip of stuff that is interesting and accurate]
-
->-> But that just points out that you don't really need to read the
->-> data every 8 ms. You could read the data every 16 ms or every
->-> 32 ms and be just fine. You just get twice or four times as
->-> much data with each read that way.
->-
->-No, I can't; the reason comes later...
->
->I do understand. You are correct.
-
-I don't think so... :-)
-
-Whether the data is collected in real time, or not, just isn't
-going to change what happens later. Whether that is 32ms later
-or 32 days later makes no difference if the data is being fed to
-a display device that updates the screen every 14ms instead of
-less than every 8 ms. And, regardless of whether it did or not,
-no human can actually *see* data on the screen being changed
-that fast!
I get your point, up to a point. 14 ms is the just-noticeable difference for
the display update. However, by not collecting and analyzing data as it comes
in, there is additional latency, latency that will add some
data slip as it beats against the 14ms video update. In short, Jens needs to
collect every 8ms to ensure that when that 14ms video update occurs, it
updates with the latest collected information.
So I believe that he still has the application correct: collect the data from
the serial port as close to real time as you can and update the display as
quickly as possible. Even though the display is the slower process, I can see
no good reason not to keep it updated with the most up-to-date information
that can be collected.
Let's make a concrete example so we can all get on the same page (time is
measured in milliseconds):
A 24-byte packet is ready from the serial port at time X. The video display
updates at time X+1, and the normal 10ms read/select/usleep timeout occurs
at X+2. The question is: is it critical to collect the data from the serial
port and update the video before time X+1? Jens seems to think so, and only
because it is easy to do (by running in a real-time scheduling queue) do I
agree.
Even though the next packet, which comes in at time X+8, won't see an update
until the next time the video updates (X+15), it's better to collect it
immediately instead of waiting until X+12, the 10ms timeout, because there
will be times when the collection and the video update can occur before the
10ms timeout comes along.
OTOH if it were difficult to get processes to wake up in a timely fashion,
an assumption which started this thread, then I'd be inclined to agree with
Floyd: it probably doesn't matter if your video is at most one frame behind
the current data.
So in the end, it seems like it's much ado about nothing.
-
->-> My understanding of what Jens said earlier (and he can correct
->-> me if I'm wrong here, because I might well be) is that he is
->-> graphing the data for display, and he believes that if he
->-> doesn't grab the data as it becomes available there will be
->-> buffer overflows and he will lose the data.
->->
->-...basically right I want to display something on the monitor.
->-The reason why I have to read the data when it arrives is pretty easy.
->-I wrote earlier that I have to detect saccades. I'm doing this with a
->-5 point differentiator. Because I'm doing this online this means that I take
->-the current point and use the 4 data frames I got before. You see here is
->-already a delay of more than 2 frames for the detection.
->
->Right. So as you pointed out from the beginning, it's a soft real time process.
->
->-When I'm reading the data every 32 milliseconds, and the subject starts to
->-make a saccadic eye-movement after 8 ms, the possibility to detect the
->-saccade and to react gets pretty small. This is why it's important to get
->-every frame of data as soon as possible.
->
->Correct. So make the process run on the real time scheduler, Make sure that
->it usleeps when it's waiting, and it'll wake up right on time.
->
->-> However, if this data is being *used* for something that is time
->-> critical, then yes there is a problem. For example, if you need
->-> to read the data, and send feedback to the device *before* it
->-> sends another packet of data... that is a serious real time
->-> constraint that a standard Linux kernel is not going to provide.
->-
->-I don't need to send feedback, but as I described, I need the data pretty
->-fast after the eye-movement occurred.
->
->I hope this helps. I used the real time scheduling queue to get precise timing
->for my PhD results. I'm sure that it'll help you in your application.
-
-Note that his input timing is *not* what determines the data
-quantization intervals. That is predetermined by the device
-connected to the serial port. As a result, how accurate the
-timing of data collection is makes absolutely no difference in
-what the data is. In fact, the data is stored a 4k byte buffer
-by the serial port device driver, and it would clearly be
-possible to grab chunks out of that buffer that are at least up
-to 1 whole second apart, and it would not affect the data
-integrity in the slightest.
-
-Displaying the data is a whole different bag of worms though,
-and nothing relating to data input is going to affect that.
Not the data integrity, but the synchronization of the data input to the
screen update. Jens wants these two bound together. I agree with you that it
doesn't matter, since the screen is slower than both the input stream and the
standard kernel quantization. But there is a soft real-time relationship
between the two.
BAJ
> Agreed. There's no apparent reason (note the word apparent here) why the
> scheduler is only run at 10ms intervals. He's frustrated because it doesn't
> meet his expectation: he asks to sleep for 800 uS and doesn't
> come back online for more than 10 times that requested timeframe.
Then he needs to understand that he's running a non-privileged process
on a general purpose OS. He'd be pretty pissed if some other process managed
to interrupt his process every 800 uS, but that's what he wants his process
to do to every other process.
DS
Not if he wants his data to display every change, which is what he
stated his goal was. (Regardless of the fact that no human will
be able to notice that change...)
However, if what you are stating above is in fact what he wants,
which is to make sure the latest data is shown when that 14 ms
interval on the video update comes around... then he doesn't
need to be concerned with real time data input at all, and he
doesn't need to be concerned about dropping a data packet, or
about the latency jitter between packets. In that case he can
just do a loop on his read(), and send whatever he gets to the
video. He will end up overwriting what is there if jitter gives
him first a too long interval and then a too short interval.
>So I believe that he still has the application correct: collect the data from
>the serial port as close to real time as you can and update the display as
>quickly as possible. Even though the display is the slower process, I can see
>no good reason not to keep it updated with as up-to-date information as can be
>collected.
Why go to all the bother, when not doing it is easier and the
result is the same? If he does that, he collects 120 samples
per second and he then provides the display with 72 of them. If
he doesn't do that, he still collects 120 samples, but he gives
the display all 120 samples and the display never gets around
to displaying more than 72 (the rest are overwritten with new
samples before they are ever displayed).
There simply is no difference in the end result. He displays
only 72 out of 120 samples. And *nothing* is going to change
that short of either a different serial device providing the
samples or a faster video device displaying them. Worse yet,
there is no way to avoid such a problem short of synchronizing
the serial device and the display device to each other!
>Let's make a concrete example so we can all get on the same page (time is
>measured in milliseconds):
>
>24 byte packet is ready from the serial port at time X. The video display
>updates at time X+1, and the normal 10ms read/select/usleep timeout occurs
>at X+2. The question is it critical to collect the data from the serial port
>and update the video before time X+1? Jens seems to think so, and only because
>it is easy to do (by running in a real time scheduling queue) I would agree.
Doing so is more complex than not doing so, and there is
absolutely no benefit, or even difference, in the results.
>Even though the next packet, which comes in at time X+8 won't see an update
>until the next time the video updates (X+15), it's better to collect it
>immediately instead of waiting until X+12, the 10ms timeout, because there will
>be times where the collection and the video update can occur before the 10ms
>timeout comes along.
That will still happen anyway. He simply *cannot* display 120
individual samples per second with a system that updates the
display 72 times per second. Whether the frame slip occurs at
X+1 or at X+9 makes no difference.
>OTOH if it were difficult to get processes to wake up in a timely fashion,
>an assumption which started this thread, then I'd be inclined to agree with
>Floyd: it probably doesn't matter if your video is at most one frame behind
>the current data.
>
>So in the end, it seems like it's much ado about nothing.
In fact, if video output is all this is doing, this entire
thread has *only* been ado about nothing. At 20 frames per
second nobody would be able to see a change, so what is the
concern about real-time collection of data at 120 frames per
second when 48 arbitrary frames are dropped and never displayed,
and when the display frame rate is already more than 3 times as
fast as the human eye can follow anyway?
Basically he is munging his data by being concerned about real
time data acquisition. What he should be doing is just using
standard methods for downloading the data and then spending his
creative efforts at processing that data to compress it by
dropping only redundant information rather than allowing data to
disappear arbitrarily.
...
>-Displaying the data is a whole different bag of worms though,
>-and nothing relating to data input is going to affect that.
>
>Not the data integrity, but the synchronization of the data input to the screen
>update. Jens wants these two bound together. I agree with you that it doesn't
>matter since the screen is slower than both the input stream and the standard
>kernel quantization. But there is a soft real time relationship between the
>two.
He wants them bound together because he mistakenly believes that
it affects data integrity. He not only stated that, but there
would be no other reasonable reason for doing so. The problem is
that binding them has the opposite effect, and causes a loss of
data integrity.
The "soft real time relations" might be interesting, but it's
nothing but a red herring that keeps his attention on the wrong
aspect.
Post the code you are using to open the serial port, and the
code you are using to configure it.
There are some very good examples of how that should be done
available on the Internet, but there are even more really bad
examples. The Linux Serial-Programming-HOWTO is one of the less
than quality examples...
> -
> -> If so then I can see why Jens is upset.
> -
> -He seems to be "upset" because Linux is not an RTOS. Nobody ever claimed it
> -was. If you beat on it enough, you can get it closer than the usual
> -"default" state of things.
>
> Agreed. There's no apparent reason (note the word apparent here) why the
> scheduler is only run at 10ms intervals. He's frustrated because it doesn't
> meet his expectation: he asks to sleep for 800 uS and doesn't
> come back online for more than 10 times that requested timeframe.
When you call a timer function, or better, let's say you call the function
usleep, and the man page says:
"The usleep() function suspends execution of the calling process until
either microseconds microseconds have elapsed or a signal is delivered to
the process and its action is to invoke a signal-catching function or to
terminate the process. System activity may lengthen the sleep by an
indeterminate amount."
Of course I'm frustrated that I don't get the results I thought I would get.
> -Whether the data is collected in real time, or not, just isn't
> -going to change what happens later. Whether that is 32ms later
> -or 32 days later makes no difference if the data is being fed to
> -a display device that updates the screen every 14ms instead of
> -less than every 8 ms. And, regardless of whether it did or not,
> -no human can actually *see* data on the screen being changed
> -that fast!
Just to end this display-the-data-on-a-monitor discussion: if I just wanted
to display the data on the monitor, 14ms would be more than enough.
Here is what the program should do:
1) Wait until 5 frames of data are available.
2) Analyze the available data to detect fast eye movements etc.
3) Get the next frame of data.
4) Use this frame and the previous 4 frames to do step 2).
And if a fast eye movement was detected, then change something on the
screen.
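In code, the loop looks roughly like this (the differentiator
coefficients and the threshold here are only placeholders, not my real
ones):
------------------------------
#define NFRAMES 5

/* Keep the last 5 position samples and apply a 5-point
 * backward-difference velocity estimate to the newest one.
 * Coefficients and threshold are placeholders only. */
static double window[NFRAMES];
static int nseen;

int process_frame(double x, double threshold)
{
    double v;
    int i;

    for (i = 0; i < NFRAMES - 1; i++)       /* shift the window */
        window[i] = window[i + 1];
    window[NFRAMES - 1] = x;
    if (++nseen < NFRAMES)                  /* step 1: wait for 5 frames */
        return 0;

    /* steps 2 and 4: 5-point backward difference of the newest sample */
    v = (25.0 * window[4] - 48.0 * window[3] + 36.0 * window[2]
         - 16.0 * window[1] + 3.0 * window[0]) / 12.0;

    return v > threshold || v < -threshold; /* saccade detected? */
}
------------------------------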
I will definitely take a look at the soft real-time functions. It's not much
effort and the results look promising for my purpose.
This basically worked, but when I had to measure the latencies of the single
steps I met the well-known "problem".
Another issue is the blocked read.
fd = open(portName, O_RDWR | O_NOCTTY);
Basically this call is not setting non-blocking mode, and since blocking
is the default, read() should block when called, right?
But actually it doesn't. It returns immediately with 0, and the read function
is called again since I haven't read 24 bytes yet. This happens around 10635
times for every frame I wait for.
I also described earlier that around every 6th frame the delay is around 30us
instead of 10000us. But the data in this frame is not valid; I get wrong
results. And after this frame the read somehow gets confused and returns
-1073743744 on the attempt to read the next byte.
And now it comes... none of this happens when I first check whether there is
at least one byte available in the buffer before I try to read the first byte
of the frame.
Jens
> There are some very good examples of how that should be done
> available on the Internet, but there are even more really bad
> examples. The Linux Serial-Programming-HOWTO is one of the less
> than quality examples...
OK, here comes my code... I thought it was good because it is recommended a
lot.
int serial_open(char *portName)
{
    int fd = 0;                 /* file descriptor */

    /* Close file if already open */
    if (fd > 0) {
        close(fd);
    }

    /* open the specified serial port for read and write, O_RDWR */
    /* O_NOCTTY makes it a non-controlling TTY. */
    fd = open(portName, O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror(portName);
        return -1;
    }

    /* Get current port configuration */
    tcgetattr(fd, &(oldtio));

    cfsetispeed(&newtio, B38400);
    cfsetospeed(&newtio, B38400);

    /* We've got baudrate. Add other options. */
    newtio.c_cflag |= CS8 | CLOCAL | CREAD;
    newtio.c_iflag = IGNPAR;
    newtio.c_oflag = 0;

    newtio.c_cc[VTIME] = 0;
    newtio.c_cc[VMIN] = 10;

    /* try and configure the port appropriately... */
    tcflush(fd, TCIFLUSH);
    tcsetattr(fd, TCSANOW, &newtio);

    serial_flush(fd);

    return fd;
}
Thanks, Jens
> But actually it doesn't. It returns directly with 0 and the read function is
> called again since I haven't read 24 bytes yet. This happens around 10635
> times for every frame I wait for.
If 'read' really does return zero, then the descriptor is hosed. You
should not call 'read' on it again but instead 'close' it and, perhaps, try
to re-open it.
DS
>> But actually it doesn't. It returns directly with 0 and the read function is
>> called again since I haven't read 24 bytes yet. This happens around 10635
>> times for every frame I wait for.
>
> If 'read' really does return zero, then the descriptor is hosed. You
> should not call 'read' on it again but instead 'close' it and, perhaps, try
> to re-open it.
I don't think so; read just returns 0 when there are 0 bytes available. And
after a few (OK, 9000) attempts to read, it actually gets something to read.
So I don't think it has anything to do with the descriptor.
Jens
Right after you set it to 0??? That doesn't make sense. I
assume this is an artifact (a broken bone, left lying around)
of doing some editing. :-) You probably had a global fd at some
point?
> /* open the specified serial port in for read and write O_RDWR */
> /* NO_CTTY makes it a non-controlling TTY. */
> fd = open(portName, O_RDWR | O_NOCTTY );
>
> if (fd < 0) {
> perror(portName);
> return -1;
> }
>
> /* Get current port configuration */
> tcgetattr(fd, &(oldtio));
The extra parens around oldtio are not needed.
I assume you have defined both oldtio and newtio globally? You
need to also initialize newtio.
newtio = oldtio;
> cfsetispeed(&newtio, B38400);
> cfsetospeed(&newtio, B38400);
>
>
> /* We've got baudrate. Add other options. */
> newtio.c_cflag |= CS8 | CLOCAL | CREAD;
This does not clear whatever might happen to be already set. You
need to just do '=', not '|='
> newtio.c_iflag = IGNPAR;
You probably want to set IGNBRK here too.
> newtio.c_oflag = 0;
>
> newtio.c_cc[VTIME] = 0;
> newtio.c_cc[VMIN] = 10;
The above two lines are ambiguous, because you have not set the
c_lflag bits. If they just happen to have the ICANON flag set,
then those lines mean nothing. If the ICANON flag is not set,
you've just set up raw input which will have no interbyte timer
and will block until 10 bytes are read.
That is definitely going to cause some erratic behavior!
You probably should do this:
newtio.c_cc[VTIME] = 1;
newtio.c_cc[VMIN] = 0;
newtio.c_lflag = 0;
This will enable raw input, and it will cause a read() to block for
as long as 100 milliseconds with no data available (clearly much longer
than your data would normally ever take unless something is wrong). It
will return instantly if a single byte of data is available.
You also need one more setting in here.
#if __linux__
#include <sys/ioctl.h> /* define N_TTY */
tty.c_line = N_TTY; /* set line discipline */
#endif
The reason for using a conditional for __linux__ is because while
that is allowed by POSIX, it is Linux specific. The conditional
makes your code portable.
> /* try and configure the port appropriately... */
> tcflush(fd, TCIFLUSH);
> tcsetattr(fd, TCSANOW, &newtio);
Rather than the above two, just do
tcsetattr(fd, TCSAFLUSH, &newtio);
If you really want to get pedantic about it, the _right_ way to do that
is,
#include <string.h>
struct termios newtio, settio;

... /* set everything in newtio */

/* set newtio, then fetch the result back into settio */
if (tcsetattr(fd, TCSADRAIN, &newtio) || tcgetattr(fd, &settio)) {
    return -1;
}
/* verify the changes were actually made */
return memcmp(&newtio, &settio, sizeof settio) ? -1 : 0;
> serial_flush (fd);
Whatever that is, it isn't necessary.
return fd;
It still makes no difference. Your analysis below must take
all of 1 microsecond to complete. You can probably do it 2000
times between each frame of data! You simply have no need to be
concerned about real time data acquisition, and that is because
time quantization and interval are determined by your measuring
device, not by the serial port.
>Here is what the program should do:
>1) Wait until 5 frames of data are available.
>2) Analyze the available data to detect fast eye movements etc.
>3) Get the next frame of data.
>4) Use this frame and the previous 4 frames to do step 2).
>
>And if a fast eye movement was detected, then change something on the
>screen.
>
>I will definitely take a look at the soft-real-time functions. It's not much
>effort and the results are looking promising for my purpose.
A total waste of time.
>This worked basically, but when I had to measure the latencies of the single
>steps I met the well-known "problem".
If you would cease your concern about "real time", you'll find
that it simply has nothing to do with your actual needs. As has
been noted, you can store the data you are getting for a year on
your disk, and then do the analysis and display. Real time data
acquisition simply has nothing to do with it.
>Another issue is the blocked read.
>
>fd = open(portName, O_RDWR | O_NOCTTY);
>
>Basically this function call is not setting the non-blocked mode and since
>blocked is default, read() should make a blocked read when called, right?
While writing this, I saw (and responded to) the post with your
code. I wrote the following before that was posted, so I didn't
include a full description of how it works in the followup to
that article.
The following will explain why your reads are doing what they are.
>But actually it doesn't. It returns directly with 0 and the read function is
>called again since I haven't read 24 bytes yet. This happens around 10635
>times for every frame I wait for.
That is not actually "nonblocking" mode. If it were in
nonblocking mode it would return -1 and errno would be set to
EAGAIN.
What you have is raw mode (which doesn't necessarily block
either, but the way it works is different) as opposed to
canonical mode.
You can adjust the way it works by setting struct termios
members c_cc[VMIN] and c_cc[VTIME], but it is a bit confusing as
to just how it works.
If VTIME is set to a non-zero value, it functions as an
interbyte timer, except for the special case when VMIN is set to
0. If VMIN is set to 0 and VTIME is non-zero, VTIME acts as a
read() timeout, and read() will return either when VTIME
deciseconds (units of 100 milliseconds) elapse with no data
available, or as soon as a single byte is received. (Note that
read() will not block indefinitely when no data is available.)
If VTIME is non-zero and VMIN is non-zero, VTIME is an
*inter*-byte timer. If either VMIN number of bytes are received
or VTIME deciseconds elapse between bytes, read() will
return. (Note that read() blocks until at least one byte is
available.)
If VTIME is zero and VMIN is non-zero, the timer is disabled,
and read() will block until VMIN bytes are available.
If VTIME and VMIN are both 0, the timer is disabled and read()
will return immediately with whatever data is available,
possibly none.
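Since switching between these modes comes up repeatedly below, here is
a small hypothetical helper (my own sketch, not from the original
posts) that changes only VMIN and VTIME on an already-configured port:
#include <termios.h>
/* set VMIN/VTIME on an already-configured port; vtime is in
 * deciseconds (units of 100 ms); returns 0 on success */
int
set_read_mode(int fd, int vmin, int vtime)
{
    struct termios t;
    if (tcgetattr(fd, &t))
        return -1;
    t.c_cc[VMIN] = vmin;
    t.c_cc[VTIME] = vtime;
    return tcsetattr(fd, TCSANOW, &t);
}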
>And I described earlier that around every 6th frame the delay is around 30us
>instead of 10000us. But the data in this frame is not valid. I get wrong
>results. And after this frame the read gets somehow confused and returns me
>-1073743744 after the attempt to read the next byte.
You are losing frame sync. The significance of the 6th frame is
that (1/HZ - 1/120) times 6 is exactly equal to 1/HZ. With HZ at
100, a tick is 10ms and a sample period is 8.333ms, so each tick
you fall 1.667ms further behind the packet stream; after 6 ticks
that drift adds up to exactly one full 10ms tick, and that time
slice sees parts of two packets. The problem is the 10 byte
block read that you are doing.
Is your data framed in any way that flags which bytes are which
in the packet of 24? Or is the interval of 2ms between packets
(at 38.4kbps a 24 byte packet takes about 6.25ms on the wire,
leaving roughly 2ms of idle line) the only way to know when a
new packet is starting?
Is there any chance you could post a description of the data packets?
>And now it comes...all of this doesn't happen when I check the buffer first
>to see if there is one byte available, before I try to read the first byte
>of the frame.
Ah, the advantages of asynchronous input... which you are
*losing* by trying to treat it as isochronous.
Just think how it would work... if you went back to that test
for one byte, but instead of using the ioctl FIONREAD command,
you used a blocking read in noncanonical mode with VMIN set to 1
and VTIME set to 0. It simply will not return until there is a
byte available, and then it will fetch just that one byte. Then
you can set VMIN to 23 and VTIME to 1, and read in the rest of
your 23 byte packet.
Here is an example of how to configure a serial port.
/*
* configure the serial port, returns 0 on success,
* non-zero otherwise (may be either positive or negative)
*
* no flow control, 8n1, full duplex, and
* single character raw i/o with blocking enabled.
*/
#include <termios.h>
#include <string.h>
#include <sys/ioctl.h> /* defines N_TTY */
#define CFLAGS (CS8 | CLOCAL | CREAD)
#define IFLAGS (IGNBRK | IGNPAR)
#define BAUD B38400
int
serial_cnfg(int fd)
{
struct termios tty, stty;
tcgetattr(fd, &tty);
/* raw io, no flow control, 8n1 */
tty.c_iflag = IFLAGS; /* input flags */
tty.c_cflag = CFLAGS; /* control flags */
tty.c_lflag = 0; /* local flags */
tty.c_oflag = 0; /* output flags */
tty.c_cc[VMIN] = 1; /* wait for 1 character */
tty.c_cc[VTIME] = 0; /* turn off timer */
#ifdef __linux__
/* for linux only */
tty.c_line = N_TTY; /* set line discipline */
#endif
cfsetospeed(&tty, BAUD); /* set bit rate */
cfsetispeed(&tty, BAUD);
if (tcsetattr(fd, TCSADRAIN, &tty) || tcgetattr(fd, &stty)) {
return -1;
}
/* verify the changes were actually made */
return memcmp(&tty, &stty, sizeof tty);
}
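A hypothetical caller (my illustration, not part of the example; the
device name /dev/ttyS0 is an assumption, use whatever port your
tracker is on) would look something like:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
if (fd < 0 || serial_cnfg(fd)) {
    perror("serial setup");
    /* bail out or retry */
}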
/*
* untested code fragment intended to read a 24 byte packet
* and call an analyzer function to operate on a 5 packet frame.
*/
int
get_data(int fd)
{
unsigned char ibuf[5][24];
int p_count = 0;
struct termios tty;
if (0 > tcgetattr(fd, &tty)) {
return 1;
}
/* flush all accumulated input data */
if (0 > tcsetattr(fd, TCSAFLUSH, &tty)) {
return 1;
}
while (1) {
tty.c_cc[VMIN] = 1;
tty.c_cc[VTIME] = 0;
if (-1 == tcsetattr(fd, TCSANOW, &tty)) {
return 2;
}
/* block until we get one byte */
if (1 > read(fd, &ibuf[p_count][0], 1)) {
return 3;
}
tty.c_cc[VMIN] = 23;
tty.c_cc[VTIME] = 1;
if (-1 == tcsetattr(fd, TCSANOW, &tty)) {
return 2;
}
/* read rest of packet */
if (23 > read(fd, &ibuf[p_count][1], 23)) {
return 4;
}
analyze_frame(p_count, ibuf);
if (++p_count >= 5) {
p_count = 0;
}
}
}
/*
* pseudo frame analysis function
*/
void analyze_frame(int p_count, unsigned char fbuf[5][24])
{
...
}
WARNING: Not only is that code untested, but I'm up way past
my nap time, and am far too groggy at the moment to notice the
dozen or so obvious errors that must be in what I'm posting.
My apologies.
> If you would cease your concern about "real time", you'll find
> that it simply has nothing to do with your actual needs. As has
> been noted, you can store the data you are getting for a year on
> your disk, and then do the analysis and display. Real time data
> acquisition simply has nothing to do with it.
Just to point it out clearly: what I'm trying to do is eye movement analysis,
specifically saccadic eye movement analysis. The basis of my experiments is
the detection of fast eye movements in "real-time" and not after a year in
the offline data. Basically the experiments are perception experiments where
a detail in a scene or animation gets changed while a subject makes fast
eye-movements.
>> But actually it doesn't. It returns directly with 0 and the read function is
>> called again since I haven't read 24 bytes yet. This happens around 10635
>> times for every frame I wait for.
>
> That is not actually "nonblocking" mode. If it were in
> nonblocking mode it would return -1 and errno would be set to
> EAGAIN.
Yes, exactly this happened when I set the parameter.
> You can adjust the way it works by setting struct termios
> members c_cc[VMIN] and c_cc[VTIME], but it is a bit confusing as
> to just how it works.
I just found some example code on a page and tried the c_cc[VMIN] and
c_cc[VTIME] parameters. But it didn't change anything...just to explain why
they are in my code. There are some artifacts around. But now I'll get
enlightened, thanks.
> If VTIME is non-zero and VMIN is non-zero, VTIME is an
> *inter*-byte timer. If either VMIN number of bytes are received
> or VTIME deciseconds elapse between bytes, read() will
> return. (Note that read() blocks until at least one byte is
> available.)
> If VTIME is zero and VMIN is non-zero, the timer is disabled,
> and read() will block until VMIN bytes are available.
>
> If VTIME and VMIN are both 0, the timer is disabled and read()
> will return immediately with whatever data is available, possibly none.
Haven't found a detailed explanation of VMIN and VTIME like this...but now
it makes sense.
> You are losing frame sync. The significance of the 6th frame is
> that (1/HZ - 1/120) times 6 is exactly equal to 1/HZ. The problem
> is the 10 byte block read that you are doing.
Not sure if it's because of the 10 byte block read, because I added the
VMIN and VTIME settings only half an hour before I posted the code, for
testing purposes. But the same problem appeared before too.
> Is your data framed in any way that flags which bytes are which
> in the packet of 24? Or is the interval of 2ms between packets
> the only way to know when a new packet is starting?
>
> Is there any chance you could post a description of the data packets?
I don't have the manual here at the moment, but I could post a description
tomorrow if it's useful. But the packet basically has one start bit, so I
can identify and sync on this bit.
>> And now it comes...all of this doesn't happen when I check the buffer first
>> to see if there is one byte available, before I try to read the first byte
>> of the frame.
> Just think how it would work... if you went back to that test
> for one byte, but instead of using the ioctl FIONREAD command,
> you used a blocking read in noncanonical mode with VMIN set to 1
> and VTIME set to 0. It simply will not return until there is a
> byte available, and then it will fetch just that one byte. Then
> you can set VMIN to 23 and VTIME to 1, and read in the rest of
> your 23 byte packet.
This is exactly what I tried to do, but it didn't work due to my poor
serial port setup. But this is the first thing I'll change tomorrow morning.
> Here is an example of how to configure a serial port.
> ... EXAMPLE ....
>
> WARNING: Not only is that code untested, but I'm up way past
> my nap time, and am far too groggy at the moment to notice the
> dozen or so obvious errors that must be in what I'm posting.
> My apologies.
No need to apologize, this is more than I expected. I was looking for some
good code examples and explanations, but without really knowing which are
good and which are bad examples it is kind of hard. Thank you for taking a
look at my crappy fragment code. I actually cleaned it a little before I
posted it. Thanks again for the detailed explanation (also in your code
reply), this is really useful for me and my understanding. Is there a good
reference you can point me to? Not sure if I still need it after
this, but maybe for further reading and future projects.
Enjoy your nap, and thanks again.
Jens
You missed one of his points though, Floyd. He munges 5 sets of samples
to detect a certain type of eye movement. So even though every sample won't
affect the display directly, every sample is involved in that type of eye
movement. So it still all needs to be collected.
Jens also points out that he wants to react to that type of eye movement
as soon as it is detectable. This also points to collecting the data as
quickly as possible.
->Let's make a concrete example so we can all get on the same page (time is
->measured in milliseconds):
->
->24 byte packet is ready from the serial port at time X. The video display
->updates at time X+1, and the normal 10ms read/select/usleep timeout occurs
->at X+2. The question is: is it critical to collect the data from the serial port
->and update the video before time X+1? Jens seems to think so, and only because
->it is easy to do (by running in a real time scheduling queue) I would agree.
-
-Doing so is more complex than not doing so, and there is
-absolutely no benefit, or even difference, in the results.
I've already shown that it isn't any more complex once you put the process
on the real time scheduling queue. It'll wake up right at time X.
-
->Even though the next packet, which comes in at time X+8 won't see an update
->until the next time the video updates (X+15), it's better to collect it
->immediately instead of waiting until X+12, the 10ms timeout, because there will
->be times where the collection and the video update can occur before the 10ms
->timeout comes along.
-
-That will still happen anyway. He simply *cannot* display 120
-individual samples per second with a system that updates the
-display 72 times per second. Whether the frame slip occurs at
-X+1 or at X+9 makes no difference.
I understand that. But given the equal choice to collect sooner or later,
it's probably better to collect sooner.
-
->OTOH if it were difficult to get processes to wake up in a timely fashion,
->an assumption which started this thread, then I'd be inclined to agree with
->Floyd, it probably doesn't matter if you video is at most one frame behind
->the current data.
->
->So in the end, it seems like it's much ado about nothing.
-
-In fact, if video output is all this is doing, this entire
-thread has *only* been ado about nothing. At 20 frames per
-second nobody would be able to see a change, so what is the
-concern about real time collection of data at 120 frames per
-second if 48 arbitrary frames are dropped and never displayed,
Because those 48 frames are still used in eye movement detection even though
they are not displayed.
-plus when the display frame rate is more than 3 times as fast as
-a human eye can follow anyway?
-
-Basically he is munging his data by being concerned about real
-time data acquisition. What he should be doing is just using
-standard methods for downloading the data and then spending his
-creative efforts at processing that data to compress it by
-dropping only redundant information rather than allowing data to
-disappear arbitrarily.
I'm still not clear if the data is redundant or not.
-
-...
-
->-Displaying the data is a whole different bag of worms though,
->-and nothing relating to data input is going to affect that.
->
->Not the data integrity, but the synchronization of the data input to the screen
->update. Jens wants these two bound together. I agree with you that it doesn't
->matter since the screen is slower than both the input stream and the standard
->kernel quantization. But there is a soft real time relationship between the
->two.
-
-He wants them bound together because he mistakenly believes that
-it affects data integrity. He not only stated that, but there
-would be no other reasonable reason for doing so. The problem is
-that binding them has the opposite effect, and causes a loss of
-data integrity.
-
-The "soft real time relations" might be interesting, but it's
-nothing but a red herring that keeps his attention on the wrong
-aspect.
In short: get it to work first, then figure out what if any optimizations are
required.
BAJ
> -There simply is no difference in the end result. He displays
> -only 72 out of 120 samples. And *nothing* is going to change
> -that short of either a different serial device providing the
> -samples or a faster video device displaying them. Worse yet,
> -there is no way to avoid such a problem short of synchronizing
> -the serial device and the display device to each other!
> -
>
> You missed one of his points though, Floyd. He munges 5 sets of samples
> to detect a certain type of eye movement. So even though every sample won't
> affect the display directly, every sample is involved in that type of eye
> movement. So it still all needs to be collected.
Exactly, the data of every single frame is important, because the detection
is based on velocity. If one frame is missing and, let's say, the velocity is
constant, I get the spatial distance covered over 3 frames and assume that
it's the distance covered over 2 frames. From this it follows that the
computed velocity is twice as high as it really is: displacement that really
accumulated over 16.7ms gets divided by 8.3ms.
> Jens also points out that he wants to react to that type of eye movement
> as soon as it is detectable. This also points to collecting the data as
> quickly as possible.
This is exactly what I'm aiming at.
> In short: get it to work first, then figure out what if any optimizations are
> required.
This is what I'm going to do now. Thanks for the hints and for being patient
in this, now pretty long, thread.
Jens
Are you saying there is in fact feedback?
Can the person whose eye movement is being measured see the monitor
screen, and is that person expected to react in any way to what is
measured?
Regardless, you are *still* talking about making changes to a screen
faster than the human eye can follow the change.
You *don't* *need* *real* *time* *data* *acquisition*.
That isn't going to change.
>> If VTIME is non-zero and VMIN is non-zero, VTIME is an
>> *inter*-byte timer. If either VMIN number of bytes are received
>> or VTIME deciseconds elapse between bytes, read() will
>> return. (Note that read() blocks until at least one byte is
>> available.)
>> If VTIME is zero and VMIN is non-zero, the timer is disabled,
>> and read() will block until VMIN bytes are available.
>>
>> If VTIME and VMIN are both 0, the timer is disabled and read()
>> will return immediately with whatever data is available, possibly none.
>
>Haven't found a detailed explanation of VMIN and VTIME like this...but now
>it makes sense.
It's in the POSIX specification, which is available online for
browsing at,

http://www.UNIX.org/version3/
Go down the list and click on the line for reading the online
Standard, then fill in the registration information (name and
email address). That gets a page where you click on one more
button and you're at the POSIX Standard. But it is sometimes
hard to find things, as their search facility seems to be either
broken or too simple.
First search for "termios", which will get the man page for
termios (interesting reading in itself). From that, click on a
button for "General Terminal Interface", which for some reason
the search mechanism won't find. Anyway, that has just all
kinds of interesting info that you'll now probably relate to
very well!
>> You are losing frame sync. The significance of the 6th frame is
>> that (1/HZ - 1/120) times 6 is exactly equal to 1/HZ. The problem
>> is the 10 byte block read that you are doing.
>
>Not sure if it's because of the 10 byte block read, because I added the
>VMIN and VTIME settings only half an hour before I posted the code, for
>testing purposes. But the same problem appeared before too.
The losing sync is probably due to the block read and/or other
strangeness in your configuration and the code calling read(),
but the grouping of time blocks is always going to be as shown,
and at every 6th time the scheduler gives you a slice (assuming
the machine is never very loaded) there is going to be a part of
two data packets instead of just one.
>> Is there any chance you could post a description of the data packets?
>
>I don't have the manual here at the moment, but I could post a description
>tomorrow if it's useful. But the packet basically has one start bit, so I
>can identify and sync on this bit.
Do you mean one start byte? Do you mean the first byte is
unique? What value is it? That's all I really need to know.
If it is easy to describe the whole thing though, go ahead and
post it. Or if you can put things on a web page that I can
access, that will work too. (I'll probably be putting several
files on my website later today and will post the URL so that
you can download them at your leisure.)
>> Just think how it would work... if you went back to that test
>> for one byte, but instead of using the ioctl FIONREAD command,
>> you used a blocking read in noncanonical mode with VMIN set to 1
>> and VTIME set to 0. It simply will not return until there is a
>> byte available, and then it will fetch just that one byte. Then
>> you can set VMIN to 23 and VTIME to 1, and read in the rest of
>> your 23 byte packet.
>
>This is exactly what I tried to do, but it didn't work due to my poor
>serial port setup. But this is the first thing I'll change tomorrow morning.
If you can make significant parts of the code available, we can
work on it.
> I was looking for some
>good code examples and explanations, but without really knowing which are
>good and which are bad examples it is kind of hard.
That can be a real problem, because there are some pretty bad
code examples out there, and no way to spot the pitfalls unless
you know enough that you don't need the example!
>Is there a good
>reference you can point me to? Not sure if I still need it after
>this, but maybe for further reading and future projects.
It is generally figured that "Advanced Programming in a Unix
Environment" (often referred to as just APUE) by the late
W. Richard Stevens (who was a regular poster to Usenet and had a
wonderful style that everyone enjoyed) is a prerequisite to
doing anything serious with UNIX. The book is now a decade old,
and for example does not talk about Linux. There are several
websites where the programming examples can be downloaded,
including patches to make it all run on Linux. Note that there
are a couple other Stevens books which are equally valued, one
is "UNIX Network Programming -- Networking APIs: Sockets and
XTI" Volume 1 2nd Edition, the other is "UNIX Network
Programming -- Interprocess Communications" Volume 2 2nd
Edition.
> -Why go to all the bother, when not doing it is easier and the
> -result is the same. If he does that, he collects 120 samples
> -per second and he then provides the display with 72 of them. If
> -he doesn't do that, he still collects 120 samples, but he gives
> -the display all 120 of samples but the display never gets around
> -to displaying more than 72 (the rest are overwritten with new
> -samples before they are ever displayed).
> -
> -There simply is no difference in the end result. He displays
> -only 72 out of 120 samples. And *nothing* is going to change
> -that short of either a different serial device providing the
> -samples or a faster video device displaying them. Worse yet,
> -there is no way to avoid such a problem short of synchronizing
> -the serial device and the display device to each other!
> -
>
> You missed one of his points though, Floyd. He munges 5 sets of samples
> to detect a certain type of eye movement. So even though every sample won't
> affect the display directly, every sample is involved in that type of eye
> movement. So it still all needs to be collected.
Certainly true. But these are - in my eyes - two distinct problems
which have to be solved one by one: a) capturing all data without loss,
b) doing it in a timely manner.
> Jens also points out that he wants to react to that type of eye movement
> as soon as it is detectable. This also points to collecting the data as
> quickly as possible.
But one must say that 'as fast as possible' is not really a valid
specification of realtime requirements. As long as these are not defined,
any 'blind' optimization may be useless. There are three timelines
involved: a) the scheduler, b) the serial input, c) the screen. So
far only a) and b) have been discussed. c) seems to me - as far as X is
involved - much more problematic to control. And it also seems to be
much more important for Jens' problem. It hardly matters when he gets
his hands on the data, as long as he can't tell when changes are displayed
(am I right?).
Are there any ideas? Some time ago I took a look at XSync extension,
but didn't understand it well. Any pointers to that subject are
welcome.
Daniel.
You are *not* losing data. You get every single byte generated.
>is based on velocity. If one frame is missing and, let's say, the velocity is
>constant, I get the spatial distance covered over 3 frames and assume that
>it's the distance covered over 2 frames. From this it follows that the
>computed velocity is twice as high as it really is.
The velocity determination has *nothing* to do with serial port
data read timing. It is determined by the sampling rate in your
serial device and the quantization by that device.
Jens, my background is in telecommunications. Digital telephone
switching systems are among the largest real time systems that
exist. *Every* time you talk into an analog POTS telephone you
generate 8000 samples per second of data which is then either
transported or analyzed (for everything from spectral
dispersion to making comparisons between isochronous signals and
noise). The significance of sampling rates and frame rates is
well understood. The same is true of various philosophies that
many people never consider until they deal with real time data
sampling, such as... which is better for a queue, FIFO or LIFO?
It depends! (Telephone systems will abandon the *oldest* call
attempt in a queue (a LIFO)... which most people find odd. But it
is much less likely to be successfully completed than the newest
call attempt.)
At this point, you aren't ready to consider what the above
implies will be significant. But once you get the interface to
the serial port device ironed out, you will need to give
consideration to all of the above in order to maximize
reliability and data integrity.
>> Jens also points out that he wants to react to that type of eye movement
>> as soon as it is detectable. This also points to collecting the data as
>> quickly as possible.
>
>This is exactly what I'm aiming at.
I have asked several times if there is feedback to the
individual being measured, and you have consistently indicated
there is not. Hence the latency between when a movement happens
and when it is displayed on the screen is insignificant. As
noted, it could be stored for a year and your analysis would
show the exact same results.
>> In short: get it to work first, then figure out what if any optimizations are
>> required.
>
>This is what I'm going to do now. Thanks for the hints and for being patient
>in this, now pretty long, thread.
With too many red fishes lying on the path...
> On 30/10/03 22:45, "David Schwartz" <dav...@webmaster.com> wrote
> in bnsls9$llb$1...@nntp.webmaster.com:
> > If 'read' really does return zero, then the descriptor is hosed. You
> > should not call 'read' on it again but instead 'close' it and, perhaps,
> > try to re-open it.
> Don't think so, read just returns 0 when there are 0 bytes available. And
> after a few (ok, 9000) attempts to read, it actually gets something to read.
> So I don't think it has anything to do with the descriptor.
Then your 'read' is some non-standard function rather than the actual
system 'read' function. If no data is ready and you issue a blocking read,
you'll block until data is available. If no data is ready and you issue a
non-blocking read, you'll get a 'would block' error. A zero return means end
of file.
DS
>> Just to point it out clearly: what I'm trying to do is eye movement analysis,
>> specifically saccadic eye movement analysis. The basis of my experiments is
>> the detection of fast eye movements in "real-time" and not after a year in
>> the offline data. Basically the experiments are perception experiments where
>> a detail in a scene or animation gets changed while a subject makes fast
>> eye-movements.
>
> Are you saying there is in fact feedback?
>
> Can the person whose eye movement is being measured see the monitor
> screen, and is that person expected to react in any way to what is
> measured?
Yes, they look at the monitor and should react to the change on the screen,
which is made depending on the measurements I take.
> It's in the POSIX specification, which is available online for
> browsing at,
>
> http://www.UNIX.org/version3/
>.....
Thanks, I'll take a look at that.
> The losing sync is probably due to the block read and/or other
> strangeness in your configuration and the code calling read(),
> but the grouping of time blocks is always going to be as shown,
> and at every 6th time the scheduler gives you a slice (assuming
> the machine is never very loaded) there is going to be a part of
> two data packets instead of just one.
Thanks to the code you posted yesterday, I was able to fix this problem
now. It didn't work directly with your code, maybe due to some old code
fragments, but after I played around a little bit it works well now.
>> I don't have the manual here at the moment, but I could post a description
>> tomorrow if it's useful. But the packet basically has one start bit, so I
>> can identify and sync on this bit.
>
> Do you mean one start byte? Do you mean the first byte is
> unique? What value is it? That's all I really need to know.
No, I meant start bit. The first bit of every byte is a frame control bit: it
is set when the byte is the beginning of a new frame, and in all other bytes
it is 0. Bit number 7 is the frame bit and all the other bits (here d) are
data bits.
Byte Bit
7 6 5 4 3 2 1 0
0 1 d d d d d d d
1 0 d d d d d d d
2 0 d d d d d d d
3 0 d d d d d d d
4 0 d d d d d d d
5 0 d d d d d d d
6 0 d d d d d d d
7 0 d d d d d d d
...
>> I was looking for some
>> good code examples and explanations, but without really knowing which are
>> good and which are bad examples it is kind of hard.
>
> That can be a real problem, because there are some pretty bad
> code examples out there, and no way to spot the pitfalls unless
> you know enough that you don't need the example!
I think this is exactly what I found...a pretty bad example.
>> Is there a good
>> reference you can point me to? Not sure if I still need it after
>> this, but maybe for further reading and future projects.
>
> It is generally figured that "Advanced Programming in a Unix
> Environment" (often referred to as just APUE) by the late
> W. Richard Stevens (who was a regular poster to Usenet and had a
> wonderful style that everyone enjoyed) is a prerequisite to
> doing anything serious with UNIX. The book is now a decade old,
> and for example does not talk about Linux. There are several
> websites where the programming examples can be downloaded,
> including patches to make it all run on Linux. Note that there
> are a couple other Stevens books which are equally valued, one
> is "UNIX Network Programming -- Networking APIs: Sockets and
> XTI" Volume 1 2nd Edition, the other is "UNIX Network
> Programming -- Interprocess Communications" Volume 2 2nd
> Edition.
Thanks for the hints. I think I read the name "Advanced Programming in a
Unix Environment" before, should visit the library to take a look.
But I think the basic communication is working now. I had some problems when I
used your code first; for example, the process stopped at the read
function and continued just after a key was pressed, or sometimes the whole
terminal didn't respond. Not sure what it was...tried for 30min and then it
was gone. Anyway, it's working now and I'm very grateful for the help you all
offered me. It's pretty hard to make it work when you don't know all the
basics and you don't know which examples are good or not. But I would
appreciate it if you would still upload some good examples which could be
useful.
One annotation to Byron's soft-real-time solution. Have you tried to use usleep()
with a parameter greater than 2000? It doesn't work. It rounds the time up
again.
Thank you,
Jens
> But one must say that 'as fast as possible' is not really a valid
> specification of realtime requirements. As long as these are not defined,
> any 'blind' optimization may be useless.
But I think Byron is right, as fast as possible is exactly what I want.
I'd like to grab the data as soon as it arrives in the serial port buffer.
>There are three timelines
> involved: a) the scheduler, b) the serial input, c) the screen. So
> far only a) and b) have been discussed. c) seems to me - as far as X is
> involved - much more problematic to control. And it also seems to be
> much more important for Jens' problem. It hardly matters when he gets
> his hands on the data, as long as he can't tell when changes are displayed
> (am I right?).
Of course, but a) and b) were the first problems to solve prior to moving on
to c). What do you mean by 'as far as X is involved'? Is X that slow at
refreshing the data on the screen?
> Are there any ideas? Some time ago I took a look at XSync extension,
> but didn't understand it well. Any pointers to that subject are
> welcome.
Same for me, so I can see what I'm dealing with.
Jens
Except he has put his fd into raw mode, and depending on the settings
for c_cc[VTIME] and c_cc[VMIN], he may or may not block, but the return
when it does not block will be 0 if no data is available.
All very POSIXly correct.
Do you understand yet the significance of *accurate* data?
Not for your program, but for people trying to explain how this
works! :-) You've previously said there was *no* feedback, and
now you say that is *exactly* what happens.
Except, it sounds as if the feedback loop has a serious amount
of filtering... as in you see something happen and instruct the
testee "Hey, do that again, harder."
Whatever... you've *still* got the same problems and nothing has
changed, and if you try to do that using real-time extensions
you will generate more trouble than not.
Here is what I understand of the functional blocks involved:
The measuring device is taking samples at 8.333ms intervals and
quantizing whatever it is that it measures. It digitizes the
measurement and sends a 24 byte data packet at 38.4Kbps over an
RS-232 serial connection. The serial port hardware buffers
as many as 16 bytes of data, and the device driver is interrupt
driven at a rate which will always stay ahead of the 16 byte
buffer. The device driver has a 4k buffer.
Your program in user space fetches data from the 4k buffer. You
can fetch however many bytes you wish, from 1 to 4k, at a time.
Your process cannot be rescheduled any more often than every
10ms.
You are analyzing the data in 5 packet rolling frames, and
displaying the results on a monitor, with (apparently) all
interpretation being done visually by human eyes.
What's it mean? ...
A human eye cannot respond any faster than about 20 frames per
second, and that is pushing it. You can probably display your
data at a *far* slower rate without the slightest effect. If
the actual interpretation of the visible data requires human
"reaction and response", you certainly have something like 100
to 200 ms of time between when data is displayed and when any
kind of response can be expected. If that is true, a 5 frames
per second display rate is probably sufficient unless you are
trying to measure viewer's reaction time. If so, it should be
closer to the 20 fps, if not it could probably be extended out
to 1 fps and have no effect whatever on your data.
In fact, if you are expecting a human to spot changes that happen
and are then gone all within a time frame of less than one second,
you've got a seriously flawed display concept. Something like
a strip chart, where the past history is available for at least
several seconds is a much better way to display it. Obviously,
that also allows even more latency.
>Thanks to your code you posted yesterday, I was able to fix this problem
There was indeed a major bug in it. I threw that call to
memcmp() into the port configuration function without testing
it, and in fact it does not work (for some interesting reasons
that I'm not yet sure what to do about).
>> Do you mean one start byte? Do you mean the first byte is
>> unique? What value is it? That's all I really need to know.
>
>No, I meant start bit. The first bit of every byte is a frame control bit: it
>is set when the byte is the beginning of a new frame, and in all other bytes
>it is 0. Bit number 7 is the frame bit and all the other bits (here d) are data bits.
>
>Byte Bit
> 7 6 5 4 3 2 1 0
>
>0 1 d d d d d d d
>1 0 d d d d d d d
>2 0 d d d d d d d
>3 0 d d d d d d d
>4 0 d d d d d d d
>5 0 d d d d d d d
>6 0 d d d d d d d
>7 0 d d d d d d d
>...
Great! That *vastly* simplifies framing!
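Just to spell out what that buys you, here is an untested sketch (mine,
not from the earlier posts) of framing on that bit, using the
hypothetical set_read_mode() helper sketched earlier: hunt for a byte
with bit 7 set, then pull in the rest of the 24 byte packet with an
interbyte timeout.
#include <unistd.h>
int
read_packet(int fd, unsigned char pkt[24])
{
    int n, sofar;
    /* block for exactly one byte at a time until the frame bit shows */
    if (set_read_mode(fd, 1, 0))
        return -1;
    do {
        if (read(fd, &pkt[0], 1) != 1)
            return -1;
    } while (!(pkt[0] & 0x80));
    /* collect the remaining 23 bytes; VTIME = 1 guards against stalls */
    if (set_read_mode(fd, 23, 1))
        return -1;
    sofar = 1;
    while (sofar < 24) {
        n = read(fd, &pkt[sofar], 24 - sofar);
        if (n <= 0)
            return -1; /* timeout or error: caller should resync */
        sofar += n;
    }
    return 0;
}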
>But I think the basic communication is working now. I had some problems when I
>used your code first; for example, the process stopped at the read
>function and continued just after a key was pressed, or sometimes the whole
>terminal didn't respond.
That sounds as if you were looking at the keyboard and not at
the serial port. Your fd variable probably wasn't the same as
the way I used it (global vs. local, for example).
>Not sure what it was...tried for 30min and then it
>was gone. Anyway, it's working now and I'm very grateful for the help you all
>offered me. It's pretty hard to make it work when you don't know all the
>basics and you don't know which examples are good or not. But I would
>appreciate it if you would still upload some good examples which could be
>useful.
Post the exact code you are using now. The code to open the
serial port, the code to configure it, and the read loop that
fetches the data. It might be interesting to see your code that
analyzes the 5 packet frame too, but that might be a bit large?
On the others I'll go through it and give you a detailed discussion
of what happens when you do whatever it is you have, and point out
if that's a good idea or not. Obviously there are a lot of quirks
to serial programming!
> You've previously said there was *no* feedback, and
> now you say that is *exactly* what happens.
Thought I made this clear enough. Sorry if this wasn't the case.
> You are analyzing the data in 5 packet rolling frames, and
> displaying the results on a monitor, with (apparently) all
> interpretation being done visually by human eyes.
No, I'm not displaying the results of the measurements on the screen. I
display something else. Here again, in detail, is what I'm trying to do. As I
described before, I'd like to do some perception experiments: perception of
the human eye, specifically perception during saccadic eye movements. The
first step for this is to set up some "change blindness" experiments.
The simple case is to have 2 pictures, let's say pictures of a desk. Both are
exactly the same picture except for one detail...let's say the mouse is
missing in one picture and not in the other. If you display these pictures
without any gap one after another, you will definitely see the difference
immediately. But if you show a completely white picture for about 80ms in
between, you won't see the difference, and it takes you a while to figure out
what the difference is.
Saccadic eye movements are similar to this white picture. This is why I need
to detect them as fast as possible, to have enough time to change the
pictures before the saccade ends. The aim of these experiments is to see
whether these changes are detected or not.
> Post the exact code you are using now. The code to open the
> serial port, the code to configure it, and the read loop that
> fetches the data. It might be interesting to see your code that
> analyzes the 5 packet frame too, but that might be a bit large?
Ok, here comes the initialization function. Maybe there are still some
fragments left...I didn't clean it up yet because I'm in a hurry.
int serial_open(char *portName, int portSpeed)
{
int fd = 0; // file descriptor
//printf("hello\n");
/* open the specified serial port for read and write (O_RDWR) */
/* O_NOCTTY makes it a non-controlling TTY. */
fd = open(portName, O_RDWR | O_NOCTTY);
if (fd < 0) {
perror(portName);
return -1;
}
/* Get current port configuration */
tcgetattr(fd, &tty);
/* raw io, no flow control, 8n1 */
tty.c_iflag = IFLAGS; /* input flags */
tty.c_cflag = CFLAGS; /* control flags */
tty.c_lflag = 0; /* local flags */
tty.c_oflag = 0; /* output flags */
tty.c_cc[VMIN] = 1; /* wait for 1 character */
tty.c_cc[VTIME] = 0; /* turn off timer */
#ifdef __linux__
/* for linux only */
// tty.c_line = N_TTY; /* set line discipline */
#endif
cfsetospeed(&tty, BAUD); /* set bit rate */
cfsetispeed(&tty, BAUD);
if (tcsetattr(fd, TCSADRAIN, &tty) || tcgetattr(fd, &stty)) {
return -1;
}
/* verify the changes were actually made */
memcmp(&tty, &stty, sizeof tty);
//try and configure the port appropriately...
// tcflush(fd, TCIFLUSH);
//tcsetattr(fd, TCSANOW, &stty);
return fd;
}
int serial_close (int fd)
{
close (fd);
return 1;
}
int serial_flush(int fd)
{
return tcflush (fd, TCIOFLUSH);
}
int serial_send_cmd(int fd, char *cstr)
{
serial_write(fd, cstr, strlen(cstr));
return 1;
}
int serial_write (int fd, char *data, int len)
{
int ol;
ol = write (fd, data, len);
if (ol < len) {
perror ("write()");
return -1;
}
return ol;
}
int serial_read (int fd, char *buff, int len)
{
int c;
c = read (fd, buff, len);
// printf("serial read done. %d bytes read\n\n",c);
return c;
}
And here comes the function which reads the data.
int
EyeTracker::update()
{
long t0, t1, t2, t;
gettimeofday( &tv, NULL );
t0 = tv.tv_sec; /* Start time in seconds */
while(1){
gettimeofday( &tv, NULL );
t1 = tv.tv_sec*1000000 + tv.tv_usec; /* us */
char buf[255];
int bytes = 0;
int res = 0;
int sofar = 0;
int fails = 0;
while((res = serial_read(fileDesc, buf, 1)) == 1 && !(buf[0] & 0x80)){
fails++;
printf("FAILED TO DETECT FRAMEBIT, res:%d look at the next bit\n",
res);
}
printf("bytes read:%d\n", res);
bytes = serial_check_buffer(fileDesc);
printf("bytes now in buffer:%d\n",bytes);
/*find the framebit, then read 24 bytes from the buffer*/
fails = 0;
sofar = 1;
while (sofar < 24){
if ((res = serial_read(fileDesc, &(buf[sofar]), 24 - sofar)) <= 0){
fails++;
}
else{
sofar += res;
printf("Fails:%d\n",fails);
printf("bytes read:%d\n", sofar);
}
}
gettimeofday( &tv, NULL );
t2 = tv.tv_sec*1000000 + tv.tv_usec; /* us */
t = t2 - t1;
printf("delay: %ld us\n\n", t );
}
return 1;
}
>
> On the others I'll go through it and give you a detailed discussion
> of what happens when you do whatever it is you have, and point out
> if that's a good idea or not. Obviously there are a lot of quirks
> to serial programming!
Sounds good...sorry for possible fragments...I haven't checked the code yet.
I'll take a look at it tomorrow and clean it up if there is something left.
But it definitely works like this.
Thanks, Jens
"> However, if this data is being *used* for something that is
> time critical, then yes there is a problem. For example, if
> you need to read the data, and send feedback to the device
> *before* it sends another packet of data... that is a
> serious real time constraint that a standard Linux kernel is
> not going to provide.
I don't need to send feedback, but as I described, I need
the data pretty fast after the eye-movement occurred."
"I don't need to send feedback", but you DO!
Whatever, I did a little research on saccadic eye movement,
and I just don't see how you are going to do anything with
8ms increments.
> On 31/10/03 11:53, "Daniel Hofmann" <daniel...@gmx.de> wrote
> in 867k2l4...@gmx.de:
>
>> But one must say that 'as fast as possible' is not really a valid
>> specification of realtime requirements. As long as these are not defined,
>> any 'blind' optimization may be useless.
>>
> But I think Byron is right, as fast as possible is exactly what I want.
> I'd like to grab the data as soon as it arrives in the serial port buffer.
What does 'as fast as possible' mean? If it means 'within
a nanosecond', all suggestions until now won't work. If it means
'within 10ms', it means something completely different regarding
programming strategies. So you should definitely define 'as fast as
possible'.
>
>>There are three timelines
>> involved: a) the scheduler, b) the serial input, c) the screen. So
>> far only a) and b) have been discussed. c) seems to me - as far as X is
>> involved - much more problematic to control. And it also seems to be
>> much more important for Jens' problem. It hardly matters when he gets
>> his hands on the data, as long as he can't tell when changes are displayed
>> (am I right?).
>
> Of course, but a) and b) were the first problems to solve prior to moving on
> to c). What do you mean by 'as far as X is involved'? Is X that slow at
> refreshing the data on the screen?
'Realtime' is not at first about absolute speed but about well defined
timely behaviour. X is an asynchronous protocol. You have (in stock X)
no guarantees about when a request by a client is performed by the
server. Especially, there is no easy way to sync on the vertical retrace,
AFAIK. The mentioned XSync extension deals with this problem, but I
didn't get it to work some months ago. IIRC it is not well supported by
all drivers, but I am not sure. So I'd be very interested in any
experiences with it.
Daniel.
>> But one must say that 'as fast as possible' is not really a valid
>> specification of realtime requirements. As long as these are not defined,
>> any 'blind' optimization may be useless.
> But I think Byron is right, as fast as possible is exactly what I want.
> I'd like to grab the data as soon as it arrives in the serial port buffer.
IMHO, the linux scheduler will give you the timeslice right after the
serial interrupt handler has read data from the port. At least, if your
program is still blocked in the read(), it should wake up immediately.
I don't think realtime scheduling can help in that, as long as there is no
other very high cpu load process running.
best regards ...
clemens
>>> But one must say that 'as fast as possible' is not really a valid
>>> specification of realtime requirements. As long as these are not defined, any
>>> 'blind' optimization may be useless.
>
>> But I think Byron is right, as fast as possible is exactly what I want. I'd
>> like to grab the data as soon as it arrives in the serial port buffer.
>
> IMHO, the linux scheduler will give you the timeslice right after the
> serial interrupt handler has read data from the port.
What's that opinion based on? The last time I got out an oscilloscope and
actually checked, the default scheduler policy does _not_ wake up a task
blocked on a read immediately. It happens at the next 10ms tick.
> At least, if your program is still blocked in the read(), it should wake up
> immediately.
Maybe it should, but according to my tests it doesn't. How did you
determine that it should behave this way?
> I don't think realtime scheduling can help in that,
Actual testing indicates that it can.
> as long as there is no other very high cpu load process running.
--
Grant Edwards grante Yow! Where's th' DAFFY
at DUCK EXHIBIT??
visi.com
> Agreed. There's no apparent reason (note the word apparent here) why the
> scheduler is only run at 10ms intervals.
It's mostly historical. Unix is a time-slicing system, and there's a trade
off between system response time and overhead. 100HZ seemed like a good
compromise back in the day. Processors are _much_ faster now, so increasing
it to 1KHz would be quite reasonable, and I believe that's going to happen.
> -> Note from the above run you don't want to use select. You want to keep running
> -> track of the last time you did a read, sleep until 8333 uS after the last read
> -> then read again. The difference with the real-time schedule is that as soon
> -> as the 8333 uS elapses, the kernel will immediately preempt whatever is running
> -> and restart your process.
> -
> -If you want to try that, you'd better do a couple other things as well:
> -
> - 1) Disable the FIFO on your UART so you get an interrupt for every byte.
> - It'll increase overhead, but it'll play hell with your timings
> - otherwise.
> -
> - 2) Set the low-latency flag on the device. By default, the serial driver
> - buffers up rx bytes and only pushes them up to the tty line discipline
> - layer once every (you guessed it!) 10ms. Setting the low-latency flag
> - will send the bytes up on every interrupt.
>
> Can you set the FIFO level?
Not that I remember.
> I think the driver generally sets it up to
> interrupt when it fills with 14 bytes (I think).
Something like that.
> Also is there an ioctl that forces a FIFO flush even though an interrupt hasn't
> occurred yet? I think that would be better than disabling the FIFO and
> generating an interrupt for each and every received byte.
I believe that's your only practical option. IIRC, that can be done by
using setserial to force the UART type to 16450.
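For reference, both tweaks mentioned above can be done from the shell
with the setserial tool (assuming the port is /dev/ttyS0; adjust to
your port):

setserial /dev/ttyS0 low_latency
setserial /dev/ttyS0 uart 16450    # effectively disables the FIFO

The second line works by telling the driver the UART has no FIFO at all.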
> -> Correct. So make the process run on the real time scheduler, make sure that
> -> it usleeps when it's waiting, and it'll wake up right on time.
> -
> -But if you don't set up the serial driver correctly, there probably won't be
> -any data there when you wake up: it'll be sitting int he UART Rx FIFO or in
> -the serial driver's receive buffer.
>
> Cool beans. I never looked at serial.c that closely.
Don't. You'll go blind. :)
The serial driver code has deteriorated over the years to the point where
it's being completely thrown out and re-written from scratch.
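For anyone wanting to try it, here is roughly what "run the process on
the real time scheduler" amounts to (a minimal sketch; it requires root,
and priority 10 is an arbitrary illustrative value):
#include <sched.h>
#include <stdio.h>
int
go_realtime(void)
{
    struct sched_param sp;
    sp.sched_priority = 10; /* arbitrary; 1-99 for SCHED_FIFO */
    if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}
Be careful: a runaway SCHED_FIFO process can lock out everything else
on the machine, including your shell.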
> When you call a timer function or better lets say you call the function
> usleep and in man is written:
>
> "The usleep() function suspends execution of the calling process until
> either microseconds microseconds have elapsed or a signal is delivered to
> the process and its action is to invoke a signal-catching function or to
> terminate the process. System activity may lengthen the sleep by an
> indeterminate amount."
>
> Of course I'm frustrated that I don't get the results I thought I would
> get.
Did you not read the last sentence in the man-page paragraph you quoted?
"System activity may lengthen the sleep by an indeterminate amount."
To be fair, he was asked if he needed to send feedback to the detecting
device which is producing the serial data stream. As I understand it, he
does not. He does, however need to provide feedback to the subject of the
experiment whose eye measurements are being detected. That would have
answered the question "why do you need real-time data acquisition" that was
asked so many times in this thread.
> Whatever, I did a little research on saccadic eye movement, and I just
> don't see how you are going to do anything with 8ms increments.
--
Said user is part of the system which is producing the serial data stream.
The user's eye motion determines what goes up on the screen and that
influences the user's next eye motion: a closed-loop control system.
We don't know the transfer function for the user (if we did his experiment
would be superfluous). However, we can safely say that it includes a
low-pass filter with a corner around 10Hz or so and about a 100ms transport
delay. This makes struggling for milliseconds pointless.
--
John Hasler
jo...@dhh.gt.org (John Hasler)
Dancing Horse Hill
Elmwood, WI
True, but the question asked specifically about the device producing the
serial data frames (the subject is not producing serial data frames) and
whether feedback had to be provided after each frame and before the next
frame would be generated. The answer to that question was no.
> The user's eye motion determines what goes up on the screen and that
> influences the user's next eye motion: a closed-loop control system.
Certainly.
> We don't know the transfer function for the user (if we did his experiment
> would be superfluous). However, we can safely say that it includes a
> low-pass filter with a corner around 10Hz or so and about a 100ms transport
> delay. This makes struggling for milliseconds pointless.
Based on a handful of undergrad psych classes, I would agree.
However, if he wants to spec some real performance requirements, (max
latency and max phase jitter), then we can try to figure out if it's
possible under Linux. I think it is possible to get farily decent
performance out of the serial driver and scheduler if you know exactly what
you're doing. When it comes to the latency and jitter caused by using an X
server to change the display, I'm completely in the dark... even measuring
it is going to be tricky...
That was a "For example...". The question was what is being
done with the data that makes it time critical, and the answer
was there is no feedback loop. The fact is, there is a feedback
loop to the device making the measurements. It goes via the
monitor and the subject rather than via the serial port, but
that wasn't part of the question.
>does not. He does, however need to provide feedback to the subject of the
>experiment whose eye measurements are being detected. That would have
>answered the question "why do you need real-time data acquisition" that was
>asked so many times in this thread.
>
>> Whatever, I did a little research on saccadic eye movement, and I just
>> don't see how you are going to do anything with 8ms increments.
With an 8ms sampling rate he can't really measure anything with
less than 16ms intervals between events. In fact, what he is
trying to measure actually appears to happen in less than 10ms,
and at least some individual events are apparently about 1ms in
length (I'm not sure if those are of any interest though.)
Regardless, the problem doesn't really change much. The OS can
analyze his data faster than it can be displayed, and faster than
a human can react to it. Now we just throw in the added problem
that the data isn't going to be very useful for measuring the
velocity of eye movement except as a very gross approximation.
>> does not. He does, however need to provide feedback to the subject of the
>> experiment whose eye measurements are being detected. That would have
>> answered the question "why do you need real-time data acquisition" that was
>> asked so many times in this thread.
>>
>
> With an 8ms sampling rate he can't really measure anything with
> less than 16ms intervals between events. In fact, what he is
> trying to measure actually appears to happen in less than 10ms,
> and at least some individual events are apparently about 1ms in
> length (I'm not sure if those are of any interest though.)
Sorry, I don't get your point. I don't try to measure events with less than
16ms intervals in between. I try to measure saccades, which last at least
around 80-90ms, so I should get enough samples in between. But since I can't
influence the given delays caused by the device and monitor etc., I need to
handle the rest as fast as possible. And this is why I need real-time data
acquisition.
>
> Regardless, the problem doesn't really change much. The OS can
> analyze his data faster than it can be displayed, and faster than
> a human can react to it. Now we just throw in the added problem
> that the data isn't going to be very useful for measuring the
> velocity of eye movement except as a very gross approximation.
I think it's still useful. Why do you think it isn't anymore?
Jens Schumacher
Jens Schumacher wrote:
> Hello,
>
> I need to collect data from the serial port every 8ms since the device
> connected to the port works at 120Hz. This doesn't seems to work properly
> due to the 10ms time slice interval of a normal i386 kernel. The
> communication to the serial port is done in user space at the moment and is
> not loaded as a module. I read a lot in the history of the newsgroup and
> there are some solutions recommended. From low latency kernel patches to the
> use of real-time Linux. But I'm pretty new to Linux and feel not comfortable
> by patching the kernel since I don't know how a low latency path affects
> other programs on the system.
>
> It is important that I get the data nearly every 8 ms because I don't want
> the serial port buffer to overflow and loose data. But the use of real-time
> Linux seems to be a overhead to me.
>
> What about soft-real-time? I looked at the sched* functions, but could'nt
> get it running as fast as I want.
> What about using the rtc?
>
> Is it also possible to run the application at that frequency to work with
> the data I get?
>
> Thank you very much,
>
>
> Jens Schumacher
>
In order to get below 10ms resolution, you'll need to make it run as a
kernel module and use gettimeofday().
I was referring to the original post, which requested 8ms (that's
milliseconds) resolution. If the original post wanted microseconds
(us), that wasn't indicated. The lowest time "tick" in user space is
10 ms (again, milli). High priority or not, you won't be able to
achieve 8ms resolution outside of kernel space.
I work with low latency serial devices and drivers all the time. I
would create (and have created) this application with the data
acquisition as a kernel module, and the data "execution" portion as a
bottom-end tasklet.
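For the curious, the "bottom end tasklet" idea looks roughly like this
with the 2.4-era kernel API (a sketch only; the function names, the
ring buffer, and the interrupt wiring are all my assumptions):
#include <linux/module.h>
#include <linux/interrupt.h>
static void crunch(unsigned long data)
{
	/* runs later, outside interrupt context: analyze whatever
	 * the ISR put in the (hypothetical) ring buffer */
}
DECLARE_TASKLET(crunch_tasklet, crunch, 0);
static void port_isr(int irq, void *dev_id, struct pt_regs *regs)
{
	/* drain the UART into the ring buffer here, then defer
	 * the heavy work to the tasklet */
	tasklet_schedule(&crunch_tasklet);
}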