Brickle Macho
May 17, 2013, 1:15:27 PM
to scikit...@googlegroups.com
I am porting an 8-line Matlab script to a standalone Python function. Basically I read in an
image, downsample it, perform some FFT operations, and output a final
smoothed image. The problem is that the output from my Python script
differs from what Matlab outputs.
After comparing the contents of the array/matrix in Python/Matlab, I
noticed the values differ after the image is converted to
grayscale. That is, when the image is read in, the values are the same;
once converted, they begin to differ around the first/second decimal place.
If I read in the file using ndimage.imread(flatten=True) then I get the
same values as Matlab, correct to 5 decimal places, whereas using
color.rgb2gray() is only correct to 2 decimal places. In the final
version of the code I will only have access to the numpy array after it
has been loaded into memory, so using imread() was just for
tracing/locating the problem. Here is a snippet of code:
>>> gray = ndimage.imread('1.jpg',flatten=True)
>>> gray /= gray.max()
>>> gray
array([[ 0.30133331, 0.2895686 , 0.28172547, ...
...
>>> gray2 = color.rgb2gray(rgb)
>>> gray2
array([[ 0.31065608, 0.29889137, 0.29104824, ...,
...
I believe this difference is causing the problem. Note that if I convert the
image to grayscale using an external tool and read that in, the
values of the numpy array match the Matlab matrix.
So what is the difference between converting an array to grayscale
versus reading it in as grayscale? Have I done something wrong? Is
there another way to convert a numpy array to grayscale?
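For what it's worth, I can get values much closer to Matlab's by doing the
weighted sum by hand with the luma weights Matlab's rgb2gray documents
(ITU-R BT.601: 0.2989, 0.5870, 0.1140). This is just a sketch based on my
assumption that the discrepancy comes from rgb2gray using a different set
of weights:

```python
import numpy as np

# ITU-R BT.601 luma weights, the ones Matlab's rgb2gray documents.
# skimage's color.rgb2gray appears to use different weights, which
# would explain a mismatch in the first couple of decimal places.
BT601_WEIGHTS = np.array([0.2989, 0.5870, 0.1140])

def rgb2gray_matlab(rgb):
    """Convert an (H, W, 3) RGB array to (H, W) grayscale using the
    same weighted sum Matlab's rgb2gray documents."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb @ BT601_WEIGHTS
```

Used in place of color.rgb2gray in my snippet above it would be
gray2 = rgb2gray_matlab(rgb), followed by the same normalisation.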
Any help appreciated.
Michael.
--