Questions about 'registration.dat' in 3DMAD dataset


Jimmy Sun

Sep 1, 2016, 5:48:06 AM
to bob-devel
Dear all,

I have applied for the 3DMAD dataset (https://www.idiap.ch/scientific-research/resources/3dmad) and found that the color and depth images in it need to be registered to each other. However, there is no detailed description of registration.dat. I am not fluent in Python and cannot locate where the file is used in the publicly available code (such as http://pypi.python.org/pypi/xbob.db.maskattack and https://pypi.python.org/pypi/maskattack.lbp).

Could anyone explain how to use this mysterious binary file, or point out where it is used in the Bob Python project?

Thank you very much!

Nesli

Sep 1, 2016, 7:04:43 AM
to bob-devel
Hi Jimmy.

You can find how the registration.dat file is used in the maskattack/lbp/script/calclbp.py file.

The registration file was created using a customized version of libfreenect: https://github.com/nerdogmus/libfreenect

Reading the file (os and shelve are standard-library modules):

import os
import shelve

reg_file = os.path.join(args.inputdir, 'documentation', 'registration.dat')
f = shelve.open(reg_file)
reg = f['reg_data']
f.close()

For every x,y point on the depth image (frame), you can retrieve the metric depth:
metric_depth = reg['raw_to_mm_shift'][frame[y,x]]

For some x,y the depth map may not have a valid depth value. In that case, the code instead uses the average of the valid depths in the 5x5 neighborhood around the pixel.
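Putting the table lookup and the 5x5 fallback together, here is a minimal sketch assuming NumPy depth frames and the common Kinect convention that raw value 2047 marks a missing reading (the helper name metric_depth_at and the invalid marker are my assumptions, not taken from calclbp.py):

```python
import numpy as np

def metric_depth_at(frame, reg, x, y, invalid=2047):
    """Return the metric depth at (x, y); if the raw reading is invalid,
    fall back to the mean metric depth of valid pixels in the 5x5 window."""
    raw = frame[y, x]
    if raw != invalid:
        return reg['raw_to_mm_shift'][raw]
    # collect valid raw readings in the 5x5 window (clipped at the borders)
    window = frame[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
    valid_raw = window[window != invalid]
    if valid_raw.size == 0:
        return None  # no valid depth anywhere in the neighborhood
    return float(np.mean([reg['raw_to_mm_shift'][int(r)] for r in valid_raw]))
```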

Using this metric depth and the linear index of the x,y point (since the Kinect image is 640x480, the code computes index = y*640+x), the color-image coordinates are obtained as:
nx = (reg['registration_table'][index][0]+reg['depth_to_rgb_shift'][metric_depth])/256
ny = reg['registration_table'][index][1]

nx and ny are the color image coordinates for x,y on the depth image.
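The two lookups above can be wrapped in one small helper. This is only a sketch: the function name depth_to_color is mine, and I assume the division by 256 is integer division, since the result is a pixel coordinate:

```python
import numpy as np

def depth_to_color(reg, frame, x, y):
    """Map a depth-image pixel (x, y) to color-image coordinates (nx, ny)
    using the lookup tables stored under 'reg_data'."""
    metric_depth = reg['raw_to_mm_shift'][frame[y, x]]
    index = y * 640 + x  # Kinect depth frames are 640x480
    nx = (reg['registration_table'][index][0]
          + reg['depth_to_rgb_shift'][metric_depth]) // 256
    ny = reg['registration_table'][index][1]
    return nx, ny
```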

Since the registration depends on the depth of the pixel, it can only go in one direction: from the depth image to the color image. However, the eye coordinates are manually annotated on the color images. For this reason, the code searches over all depth-image pixels in a neighborhood of the eye coordinates (searching the whole image is unnecessary since the shift is limited) until it finds the pixel whose nx and ny values match the annotated eye coordinates.
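That reverse search can be sketched as a brute-force scan over a window of depth pixels. The function name, the search radius, and the forward mapping inlined in the loop are my assumptions, not the actual calclbp.py code:

```python
import numpy as np

def color_to_depth(reg, frame, ex, ey, radius=30):
    """Brute-force inverse mapping: scan depth pixels in a window around the
    annotated color coordinates (ex, ey) until one registers onto them."""
    h, w = frame.shape
    for y in range(max(ey - radius, 0), min(ey + radius + 1, h)):
        for x in range(max(ex - radius, 0), min(ex + radius + 1, w)):
            # forward depth -> color mapping for this candidate pixel
            metric_depth = reg['raw_to_mm_shift'][frame[y, x]]
            index = y * 640 + x
            nx = (reg['registration_table'][index][0]
                  + reg['depth_to_rgb_shift'][metric_depth]) // 256
            ny = reg['registration_table'][index][1]
            if (nx, ny) == (ex, ey):
                return x, y  # depth pixel that maps to the eye coordinates
    return None  # nothing in the window registers onto (ex, ey)
```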

Nesli

Jimmy Sun

Sep 1, 2016, 8:39:44 AM
to bob-devel
Hi Dr. Erdogmus,

Thank you very much for the detailed reply!

It seems I have to install libfreenect first and then use a Python script to get the registered depth image.
I will try it out following your instructions. Thank you very much!

BTW, it seems I should have contacted you at your email address earlier. Sorry for disturbing one more professor, so sorry :(

Manuel Günther

Sep 1, 2016, 1:02:34 PM
to bob-devel
Dear Jimmy,

you made the right choice when you wrote to this mailing list. Since it is public (as opposed to private email), other users with the same or a similar problem can read this description and do not need to write yet another private email with the same question.
Hence, you are saving us time rather than disturbing us :-)

Manuel

Jimmy Sun

Sep 1, 2016, 10:19:42 PM
to bob-devel
Dear Manuel,

Thank you very much. I will contact Dr. Erdogmus in this thread if I encounter further questions.

Thanks again to Dr. Erdogmus for the detailed description.

Jimmy Sun