Question about unexpected events when projecting light with a projector


Hyunwoo Kim

Jul 31, 2023, 11:02:24 PM
to davis-users
Hi there,

I'm trying to implement 3D reconstruction using the structured light method with a DAVIS 346 and a DLP 4500 projector.
However, after projecting light onto the object, I observed unexpected events.
Looking at the event data for the scene after projecting a whole white pattern, I expected only ON events to occur, but OFF events also occurred.
You can see this in the attached picture. (Blue points represent ON events and red points represent OFF events.)
The same phenomenon occurred not only with the DLP projector but also when the light was projected using a flashlight.

Also, in the plotted event data, events do not occur at the same time but vary along the x-axis: events occur later as the x position of the pixel decreases.

I wonder why these phenomena occurred.

Thanks,
Kim
event_plot.png
after_illumination.png

Marwan Ibrahim

Aug 2, 2023, 6:23:31 AM
to davis-users
Hello Kim, 

Regarding the occurrence of both ON and OFF events, this is expected for a flashing pattern. If the pattern was not flashing, you would expect to see only one burst of ON events when the pattern is illuminated, followed by no events as long as the pattern is uniformly illuminated. Instead, what you observe is a periodic stream of ON and OFF events. A DLP 4500 projector uses the quick switching of micromirrors to generate different light intensities (refer to p. 42 of the User Guide for an explanation of this), so it is expected for the pattern to be flashing. The same occurs with PWM-controlled flashlights, where again you expect to see a periodic stream of ON and OFF events.
  
As for the unexpected behavior you observe in the event timestamps, my best guess would be high readout latency due to the asynchronous readout scheme of the DAVIS 346. When the event rate exceeds the maximum readout speed, as in the case you tested here, the asynchronous readout scheme of the DAVIS 346 behaves more like a high-latency, synchronous readout scheme: for each pixel checked during readout, an event is available for transmission. The result is high readout latency under high event rate scenarios, causing simultaneous events to receive very different timestamps. An explanation of this behavior can be found in this white paper comparing asynchronous and synchronous readout schemes.

Having this readout latency be influenced by the x position of the pixel may additionally have to do with your operation of the DLP projector, or other factors that influence the distribution of event data over the sensor array. 

In any case, for such high event rate applications, it is recommended to use a camera with a synchronous readout scheme (such as the DVXplorer), which provides a higher readout speed at the expense of higher power consumption and lower temporal resolution. We tested a similar structured light setup consisting of a DVXplorer camera and a DLP 4500 projector and achieved good results under both low and high event rate conditions. Attached is a picture of binary structured light projections at over 2 kHz, and the generated event data, colored from oldest (blue) to latest (red) events.


Hope this answers your question.

Kind regards,
Marwan

Hyunwoo Kim

Aug 3, 2023, 5:06:52 AM
to davis-users
Hello Marwan,

Thank you for your reply.
I understand that the per-pixel timing of the observed events is due to the readout speed, but I'm still curious about the unexpected events.
I read the DLP 4500 user guide you referred to, but in my test I used a 1-bit image (a whole white image).
Therefore, I believe no flashing should occur to express the pixel intensity.
Also, after the projector first projected light, no events occurred until the light was turned off.
Is it then most appropriate to assume that the cause of this problem is microscopic vibration of the micromirrors?
Also, I'm sorry, but I can't see the attached image. Could you send it as a file attachment?

Many thanks,
Kim
On Wednesday, August 2, 2023 at 7:23:31 PM UTC+9, Marwan Ibrahim wrote:

Marwan Ibrahim

Aug 3, 2023, 10:56:21 AM
to davis-users
Hello Kim,

Thank you for the clarification.

You are correct in assuming that projecting a white image as a single-bit image using a DLP projector should result in next to no flashing in the projected output.
If I understand correctly, you observe a sudden burst of events when turning on the projection, followed by no events, and another burst of events when turning off the projection?

If so, I believe there are a few possible explanations for why you would observe both positive and negative polarity events during the first burst. These include:
  1. Readout latency. Within each pixel of a DVS/DAVIS camera, a comparator circuit compares the previous light intensity to the current light intensity to determine the event polarity (refer to this paper for an explanation of the circuit design). If I am not mistaken, for the DAVIS the previous light intensity used for the comparison is reset after each event readout. This means that under high readout latency, the previous light intensity could be reset to a lower/higher value than expected, resulting in both positive and negative polarity events. Note that in this case, you should expect more spurious negative polarity events in regions of high readout latency, as those pixels were exposed to light for a longer time due to the latency, meaning there is a bigger difference between the previous and current light intensity for those pixels. This coincides with the plot you provided, where you observe more negative events towards the left side of the sensor array, which is also the region with the later event timestamps and thus the higher latency.
  2. DLP projections. Most illumination sources exhibit some form of ramp up/delay during illumination, so that what is projected is not exactly a sudden step increase in brightness, but rather a gradual increase of brightness with some delay. While this is usually minor, it can influence the generated light intensity and the resulting events. A few factors that may influence this include the type of light source used in the DLP projector (LED, laser-based, etc.), the operation temperature of the DLP, etc.
  3. Imperfections in DAVIS sensor array. Imperfections in the analog circuit for each DAVIS pixel could result in spurious negative events being generated. This should usually be minimal (limited to a small percentage of the overall sensor array) and spurious (not following a specific pattern).   
 I do not believe microscopic vibrations of the DLP micromirrors are the cause of the unexpected event polarities, since small changes in the micromirror tilt angles should not produce intensity changes large enough to generate many negative polarity events.

Please also find attached the images from before showing dense structured light projections from a DLP 4500 captured using a DVXplorer camera. Note that events are colored from oldest (blue) to latest (red).

Hope this answers your question.

Kind regards,
Marwan

Binary_pattern_projections.png
Binary_pattern_3D_events.png

Hyunwoo Kim

Aug 7, 2023, 11:16:19 AM
to davis-users
Hello Marwan,

Thank you for your reply.
I think it is correct that the unexpected events were observed because of the readout latency.
To reduce the number of events generated, I simply projected a single line and shifted it.
As a result, the events appeared as expected.
Your answers were very helpful and I appreciate it.

Then, I have an additional question about readout latency.
If the high event rate causes high readout latency, is it possible to observe only the events of the pixels of interest and ignore the events of the other pixels to reduce readout latency?

Thank you for your help.
Kim

On Thursday, August 3, 2023 at 11:56:21 PM UTC+9, Marwan Ibrahim wrote:

Marwan Ibrahim

Aug 8, 2023, 5:19:27 AM
to davis-users
Hello Kim,

You bring up a very interesting theoretical point for event-based structured light systems. 

Short answer:
Yes. As a high event rate causes high readout latency, it is possible to reduce the readout latency for event cameras by defining regions of interest (ROIs) for readout. 

Long answer:
There are multiple reasons why you would want such filtered readout for a structured light system. Event cameras are designed for the readout of sparse, asynchronous data, while structured light systems benefit from dense, synchronous projections (using fewer, denser projections means you can estimate depth for each pixel faster). Thus, combining dense projections with some form of ROI filter offers the best of both worlds, allowing fast depth estimation only for the regions that require it. This is directly linked to the topic of "guided depth sensing", an active area of research where a secondary, "guiding" camera detects regions of interest in the scene while a structured light system estimates depth for those regions (see this paper for a brief introduction to the topic). Funnily enough, my master's thesis was on exactly this: together with iniVation, we developed a faster-than-real-time event-guided depth sensing system capable of spatially asynchronous scanning of objects at speeds of over 1 kHz (using 2 DVXplorer cameras and a DLP 4500 projector).

From a technical point of view, my only recommendation is to define the region of interest from the projector side rather than from the camera side. 

Most iniVation cameras, such as the DAVIS 346 and DVXplorer, come with a native ROI filter that allows cropping event data directly on the camera before sending it out over USB. These filters, however, operate differently on different cameras. For the DVXplorer, the internal camera chip directly supports both an ROI filter (defined as a rectangle within the sensor resolution) and an area blocking filter (which blocks event data from individual 32x32 blocks of pixels). The area blocking filter blocks event data at the block level before timestamping, reducing the event rate and thus the readout latency. This is not the case for the ROI filter, which, if I am not mistaken, filters event data only after timestamping, meaning it has minimal effect on the readout latency. The DAVIS 346 only offers an ROI filter on the FPGA, so setting such an ROI filter should not significantly reduce readout latency (despite reducing the output event rate received over USB).

On the other hand, setting the region of interest on the DLP 4500 side can simply be done by masking the projected bit images. To allow fast adaptation of the projected bit images, you can operate the DLP in video pattern mode and use masking operations on the GPU to mask the frames provided to the DLP projector over HDMI, which then projects individual bit images from the received RGB frames. Note that this is rather involved, and I would recommend taking a look at the DLP 4500 User Guide for an explanation of operating the projector in this mode.

In any case, here is an explanation of how to set up the ROI filter for the event camera. Through dv-runtime, the ROI filter can be enabled via the "crop" option in the input module. Through dv-processing, it can be enabled via the deviceConfigSet method of the CameraCapture class; the configurations for setting the ROI can be found here. For example, using the C++ API of dv-processing, you can set the region of interest of a DAVIS 346 camera to a rectangle with x coordinates ranging from 10-20 and y coordinates ranging from 30-40 as follows:

// Required headers: dv-processing for CameraCapture, libcaer for the DAVIS config constants
#include <dv-processing/io/camera_capture.hpp>
#include <libcaer/devices/davis.h>

// Initialize camera capture (opens the first discovered camera)
dv::io::CameraCapture capture;

// Set the x-coordinate range of the ROI
const uint32_t xStart = 10, xEnd = 20;
capture.deviceConfigSet(DAVIS_CONFIG_DVS, DAVIS_CONFIG_DVS_FILTER_ROI_START_COLUMN, xStart);
capture.deviceConfigSet(DAVIS_CONFIG_DVS, DAVIS_CONFIG_DVS_FILTER_ROI_END_COLUMN, xEnd);

// Set the y-coordinate range of the ROI
const uint32_t yStart = 30, yEnd = 40;
capture.deviceConfigSet(DAVIS_CONFIG_DVS, DAVIS_CONFIG_DVS_FILTER_ROI_START_ROW, yStart);
capture.deviceConfigSet(DAVIS_CONFIG_DVS, DAVIS_CONFIG_DVS_FILTER_ROI_END_ROW, yEnd);

Hope this answers your question.

Kind regards, 
Marwan

Hyunwoo Kim

Aug 10, 2023, 1:47:06 AM
to davis-users
Hello Marwan,

Thank you very much for your answers.
As you said, when I reduced the projection range, I could see that the events occurred much more accurately!
And based on your example code, I successfully set the camera ROI filter using the C++ API.
I fully understand the points I was curious about, and thank you again for your help.

Best regards,
Kim

On Tuesday, August 8, 2023 at 6:19:27 PM UTC+9, Marwan Ibrahim wrote: