Dear All,
I am quite new to deep learning. Recently I read a paper titled "Multimodal Deep Learning for Robust RGB-D Object Recognition" by Andreas Eitel et al. In this paper, they propose an encoding technique that converts depth images into three-channel images, evaluated on the Washington RGB-D Object Dataset. Their method is the following:
"First, normalize all depth values to lie between 0 and 255. Then, apply a jet colormap on the given image that transforms the input from a single-channel to a three-channel image (colorizing the depth). For each pixel (i, j) in the depth image d of size W x H, map the distance to color values ranging from red (near) over green to blue (far), essentially distributing the depth information over all three RGB channels."
I tried the technique on a single depth image, but my resulting image was not the same as the images shown in Andreas Eitel's paper. I tried the following code in MATLAB:
img = imread('keyboard_1_2_26_depthcrop.png');
% Scale the raw depth values to the range [0, 255]
normImage = double(img) / double(max(img(:))) * 255;
% Display with the jet colormap (note: this only changes how the
% figure is rendered; it does not produce a 3-channel image array)
figure; colormap(jet(256)); image(normImage);
Does anyone have an idea how to normalize all the depth images of the Washington RGB-D Object Dataset as described in that paper? If possible, please share sample code in MATLAB or Python.
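For reference, here is my rough Python attempt at the same steps, using matplotlib's jet colormap. I am not sure this matches the paper's implementation exactly: the function name `colorize_depth`, the inversion of jet so near maps to red, and the handling of zero-valued (missing) depth pixels are my own assumptions.

```python
import numpy as np
from matplotlib import cm

def colorize_depth(depth):
    """Map a single-channel depth image to a 3-channel jet-colorized image.

    Assumes `depth` is a 2D array of raw depth values, where zeros
    mark missing readings (as in the Washington RGB-D dataset).
    This is a sketch of the paper's description, not their exact code.
    """
    depth = depth.astype(np.float64)
    valid = depth > 0
    # Normalize valid depths to [0, 1]
    d_min, d_max = depth[valid].min(), depth[valid].max()
    norm = np.zeros_like(depth)
    norm[valid] = (depth[valid] - d_min) / (d_max - d_min)
    # matplotlib's jet maps 0 -> blue and 1 -> red, but the paper wants
    # near -> red and far -> blue, so invert the normalized values
    rgb = cm.jet(1.0 - norm)[..., :3]  # drop the alpha channel
    rgb[~valid] = 0.0                  # black out missing depth pixels
    return (rgb * 255).astype(np.uint8)

# Example with synthetic depth data (values in millimeters)
depth = np.random.randint(500, 3000, size=(48, 64)).astype(np.uint16)
colored = colorize_depth(depth)
print(colored.shape)  # (48, 64, 3)
```

Does this look like a reasonable reading of the paper's description, or is there a normalization step I am missing?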