
Stereogram decoder?


Andrew Jones

Sep 20, 2003, 6:37:54 PM
Is there such a thing as a stereogram (magic eye, 3d image) 'decoder'?
Is it even possible to do that?

--
Andrew Jones
[please remove 'spluc.' from my
email address to contact me]

Carsten Neubauer

Sep 21, 2003, 10:09:52 AM
hi andrew,

>Is there such a thing as a stereogram (magic eye, 3d image) 'decoder'?
>Is it even possible to do that?

why not? somebody must have done that already...

you might have noticed that the depth we seem to see
depends on the horizontal distance between repeating
patterns.
so you would have to process line by line.
take a pixel p1, read its color and search in x-direction for
a similar pixel p2. if the pixels around p2 are similar to the
pixels around p1, you have found a match.
store this match-distance as luminance in another bitmap
and perhaps remove some noisy miscalculated pixels
which differ too much from their neighbours.
and voila, you have decoded the magic image.
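
a very rough sketch in plain C (grayscale pixels, a fixed search range and
window size are just assumptions to keep it short):

    #include <stdlib.h>

    /* rough sketch: decode one scanline of a single-image stereogram.
       img is one grayscale scanline of 'width' pixels; depth receives
       the best match-distance per pixel (0 where nothing usable is found). */
    void decode_scanline(const unsigned char *img, unsigned char *depth,
                         int width, int minShift, int maxShift, int win)
    {
        int x, s, k;
        for (x = 0; x < width; x++)
        {
            int  bestShift = 0;
            long bestErr   = -1;

            for (s = minShift; s <= maxShift; s++)
            {
                long err = 0;
                for (k = -win; k <= win; k++)   /* compare pixels around p1 and p2 */
                {
                    int i1 = x + k, i2 = x + s + k;
                    if (i1 < 0 || i2 < 0 || i1 >= width || i2 >= width)
                        { err += 255; continue; } /* penalise pixels outside the line */
                    err += abs((int)img[i1] - (int)img[i2]);
                }
                if (bestErr < 0 || err < bestErr)
                    { bestErr = err; bestShift = s; }
            }
            depth[x] = (unsigned char)bestShift;  /* match-distance as luminance */
        }
    }

call it once per scanline and you get the raw depth map; a small median
filter over the result should remove most of the miscalculated pixels.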

carsten neubauer
http://www.c14sw.com/

John Chewter

Sep 21, 2003, 5:18:01 PM
oh - that it were so simple.......

Hmmm.
Many books and papers have been written on this and allied themes.
Nobody seems to have completely cracked it.


"Carsten Neubauer" <cne...@aol.comNOJUNK> wrote in message
news:20030921100952...@mb-m05.aol.com...

Martin Leese

Sep 21, 2003, 6:29:32 PM
Carsten Neubauer wrote:
> hi andrew,
>
> >Is there such a thing as a stereogram (magic eye, 3d image) 'decoder'?
> >Is it even possible to do that?
>
> why not? somebody must have done that already...
>
> you might have noticed that the depth we seem to see
> depends on the horizontal distance between repeating
> patterns.
> so you would have to process line by line.

There are certainly algorithms that "decode" random dot
stereograms. See, for example, the late David Marr's
classic book "Vision: A Computational Investigation into
the Human Representation and Processing of Visual
Information", 1982.

The difference between random dot and magic eye is that
the random dot has a unique solution whereas, I guess,
the magic eye does not. Algorithms to "decode" random
dot stereograms would be an excellent place to start.

--
Regards,
Martin Leese
E-mail: ple...@see.Web.for.e-mail.INVALID
Web: http://members.tripod.com/martin_leese/

Carsten Neubauer

Sep 22, 2003, 10:07:23 AM
Martin Leese wrote:
>...

>The difference between random dot and magic eye is that
>the random dot has a unique solution whereas, I guess,
>the magic eye does not. Algorithms to "decode" random
>dot stereograms would be an excellent place to start.
>

i think in random-dot and in color-pattern stereograms there
will be many pixels with several possible matches nearby.
so the algorithm would have to compare the result with the
surrounding pixels and choose a similar one.
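
just to illustrate the idea (again only a sketch in plain C, the names and
the neighbourhood size are made up): keep the candidate match-distances for
a pixel and take the one that agrees best with the neighbours you already
decoded:

    #include <stdlib.h>

    /* sketch: from several candidate match-distances for one pixel, pick
       the one closest to the already-decoded left neighbours.
       assumes at least one candidate. */
    int pick_match(const int *candidates, int numCandidates,
                   const unsigned char *depth, int x, int win)
    {
        int i, k, best = candidates[0];
        long bestDiff = -1;

        for (i = 0; i < numCandidates; i++)
        {
            long diff = 0;
            for (k = 1; k <= win && x - k >= 0; k++)  /* compare with neighbours */
                diff += abs(candidates[i] - (int)depth[x - k]);
            if (bestDiff < 0 || diff < bestDiff)
                { bestDiff = diff; best = candidates[i]; }
        }
        return best;
    }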

carsten neubauer


http://www.c14sw.com/

John Chewter

Sep 22, 2003, 5:27:09 PM
I still want to see a working, practical - no excuses - solution for stereo
vision.

I have read many (too many) papers on this.

Even photo pairs - without artificial targets placed in the field before
taking pics - seem iffy at best.

If a practical, usable solution exists........ they are keeping very quiet.

If I missed it, would someone clue me in? I would rather look an idiot for a
month than for ten years.

John

"Carsten Neubauer" <cne...@aol.comNOJUNK> wrote in message

news:20030922100723...@mb-m20.aol.com...

Carsten Neubauer

Sep 26, 2003, 1:55:21 PM
hi john,

you mention a different problem now.
'stereo vision', 'artificial vision' or 'computer vision'
usually means calculating depth-images from two or
more 'real' 2d-images.
that's another cup of tea and much more difficult than
reverse-rendering magic-eye images.

several universities all over this planet are working
in this area and some already have working software,
though everything i have read about seemed extremely
complex and slow.
you have to find and mark a lot of special features like
edges and corners and search for their positions in the
other images, or calculate the 'optical flow' between two
images, or something similar...
anyway, you end up searching through images and lists
endless times and, if you are lucky, you get a noisy or
blurred image.
not quite the practical and usable solution we are all looking for...

perhaps we have to wait until some neuroscience experts
find out how our brain sees and how we learn to see 3d
in our first weeks.

carsten neubauer


http://www.c14sw.com/

Philip Wickberg

Sep 27, 2003, 9:49:51 AM
Hi Carsten,

I see that you, in the newsgroup sci.image.processing, mention
'stereo vision', 'artificial vision' and 'computer vision'!

I have read several papers on the subject and I have (for some time) been
trying to implement a stereo matching algorithm in MATLAB (for my master's
thesis), but I haven't succeeded 100% yet. So I'm right now looking for such
an implementation, so that I can compare my own algorithm with it to see
what's wrong.

Do you have a tip, i.e. links to places where they have the code (preferably
MATLAB code) for such an implementation?
I already know of the evaluation page at middlebury.com/stereo, but that is
only for the results.

\Philip Wickberg (B.Sc.)

"Carsten Neubauer" <cne...@aol.comNOJUNK> skrev i meddelandet
news:20030926135521...@mb-m20.aol.com...

Carsten Neubauer

Sep 27, 2003, 10:27:27 AM

Carsten Hammer

Sep 27, 2003, 5:10:03 PM

Philip Wickberg wrote:


> Hi Carsten,
>
> I see that you, in the newsgroup sci.image.processing, mention
> 'stereo vision', 'artificial vision' and 'computer vision'!
>
> I have read several papers on the subject and I have (for some time) been
> trying to implement a stereo matching algorithm in MATLAB (for my master's
> thesis), but I haven't succeeded 100% yet. So I'm right now looking for such
> an implementation, so that I can compare my own algorithm with it to see
> what's wrong.
>
> Do you have a tip, i.e. links to places where they have the code (preferably
> MATLAB code) for such an implementation?
> I already know of the evaluation page at middlebury.com/stereo, but that is
> only for the results.

Hi Philip,
at http://axon.physik.uni-bremen.de/research/stereo/
you find some explanations and examples; unfortunately the online
processing is now permanently offline.
Best regards,
Carsten Hammer

Carsten Neubauer

Sep 27, 2003, 6:18:12 PM
>Hi Philip,
>at http://axon.physik.uni-bremen.de/research/stereo/
>you find some explanations and examples; unfortunately the online
>processing is now permanently offline.
>Best regards,
>Carsten Hammer
>

hi carsten hammer,
very interesting site, too bad the online processing is offline!
http://axon.physik.uni-bremen.de/research/stereo/rds/index.html
answers andrew's initial question about decoding
magic-eye stereograms.

unfortunately all the depth-images are very coarse-grained and blurred,
like all other material i've seen before.
at first i thought more quality is not possible from low-resolution images,
but if i am right - please somebody correct me if it's wrong - and the
back of each of our eyes contains about 120,000,000 photoreceptors, this
could be 'equal' to images of 4000x3000 pixels in size.
this is not so far above the resolution of usual camera images, so
resolution should not be the main reason for this low quality.
it still makes me wonder how our stupid little neurons can create such
an exact 3d impression from so little input.
perhaps there is some preprocessing right in the retina.
for a local group of receptor cells it should be easier to detect motion
than for our single-processor computers.

philip, could you tell us about the processing-steps you do in your
algorithm?

greetings,
carsten neubauer

http://www.c14sw.com/

John Chewter

Sep 28, 2003, 3:39:14 AM
I want to visualise 3d - as opposed to making a 3d model. That is, SEE
in 3d from images.

I have done this successfully by making anaglyphs from stereo pairs, but I
want to enhance and automate this.
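
(For what it's worth, the basic red-cyan combination itself is only a few
lines of C - this is just a sketch and assumes two already-aligned,
interleaved 8-bit RGB buffers:)

    /* Rough sketch: build a red-cyan anaglyph from two aligned RGB images. */
    void make_anaglyph(const unsigned char *left, const unsigned char *right,
                       unsigned char *out, int width, int height)
    {
        int i, n = width * height;
        for (i = 0; i < n; i++)
        {
            out[3*i + 0] = left [3*i + 0];   /* red channel from the left image     */
            out[3*i + 1] = right[3*i + 1];   /* green and blue from the right image */
            out[3*i + 2] = right[3*i + 2];
        }
    }

The hard part, of course, is the alignment and automation, not this step.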

I use a 3D engine (Truevision3D) for other things and am trying to put the
images on two meshes so that they are perpendicular to the original camera
planes and the centres of interest coincide - automatically.
I can then calculate the viewpoint and position my camera accordingly.
I can do this by manual placement and trial and error, and the results are
better than just combining two 2d images.
In this way they should be more accurate and easier to use.

Others have gone this route before me and have mined the road with
unrealistically expensive patents....
These rely on elaborate mechanical positioning devices so that the exact
positions and angles of the subject and cameras are known.

If I put three unique targets in the field of view then I should be able to
find those targets and calculate the cameras' relative positions and
angles.....

Well, that's the theory................
I have some ideas for how to do this, but if anybody has any code/formulae
etc. that might help I would greatly appreciate it.

John Chewter

"Carsten Neubauer" <cne...@aol.comNOJUNK> wrote in message

news:20030927181812...@mb-m20.aol.com...

Philip Wickberg

Sep 28, 2003, 9:14:49 AM
Hi Carsten,

Thank you for the feedback!

I'm sorry to notice that the site at
http://robotica.udl.es/links/procesado.htm is down right now, so I haven't
been able to visit it.
Could you send me the C-programming links you found?
Perhaps you have some useful MATLAB code that you downloaded yourself?
Right now I'm very keen to find out HOW others have implemented their
algorithm for stereo matching, using e.g. (preferably) MATLAB.
You asked me:

> philip, could you tell us about the processing-steps you do in your
> algorithm?

Although I have read several papers on the subject, the thing I'm trying to
do at THIS step is very basic and simple (so that I will get a good
basic feeling).
I compare the left image (reference) with the right image. I then simply
generate one disparity map from the SAD and one from the SSD. That's
all!
Yet when I evaluate the images (with the right image sizes at
middlebury.com/stereo) I keep on getting large errors.

The left and right images are rectified so that I only have to consider the
scanline.
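
The SAD step is roughly this idea (sketched here in plain C rather than
MATLAB; the window size and disparity range are just placeholders):

    #include <stdlib.h>

    /* Rough sketch, not the actual MATLAB code: for each pixel of the left
       scanline, find the disparity with the smallest sum of absolute
       differences over a small window. */
    void sad_disparity_line(const unsigned char *left, const unsigned char *right,
                            unsigned char *disp, int width, int maxDisp, int win)
    {
        int x, d, k;
        for (x = 0; x < width; x++)
        {
            int  bestD   = 0;
            long bestSad = -1;
            for (d = 0; d <= maxDisp && x - d >= 0; d++)
            {
                long sad = 0;
                for (k = -win; k <= win; k++)
                {
                    int xl = x + k, xr = x - d + k;
                    if (xl < 0 || xr < 0 || xl >= width || xr >= width)
                        continue;              /* ignore pixels outside the line */
                    sad += abs((int)left[xl] - (int)right[xr]);
                }
                if (bestSad < 0 || sad < bestSad) { bestSad = sad; bestD = d; }
            }
            disp[x] = (unsigned char)bestD;    /* one entry of the disparity map */
        }
    }

For the SSD version the abs() term simply becomes the squared difference.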

Thanks, sincerely
\Philip Wickberg (B.Sc.)

"Carsten Neubauer" <cne...@aol.comNOJUNK> skrev i meddelandet

news:20030927102727...@mb-m25.aol.com...

Carsten Neubauer

Sep 30, 2003, 7:57:03 AM
hi philip,

so you 'only' have the problem of mapping two scanlines onto
each other precisely.
there you encounter moved and scaled areas, which are quite easy
to detect, but also inserted and deleted portions, not to mention
untextured areas without details.


i have not tried reverse rendering myself yet, but i have some
experience in speech recognition, where similar problems
exist (e.g. when comparing slowly spoken with fast spoken words;
googling for 'dynamic time warping' might give you new ideas).
i do not know what's possible with MATLAB, i can only tell you
what i would do. maybe it is helpful.


1)
how about choosing some well-textured reference scanlines with
a known result and then working only on those examples until the
results become good enough? perhaps you've already done that.

2)
i would start with a difference table.
let's say both scanlines are 640 pixels wide.
then use an array with 640x640 elements
and fill it like this (hope you understand some plain C):

    int i, j, diff;
    for (j = 0; j < 640; j++)        // step through left scanline
        for (i = 0; i < 640; i++)    // step through right scanline
        {
            diff = LeftScanline[j] - RightScanline[i];
            DiffTable[j][i] = abs(diff);
        }

where abs(diff) means the absolute (unsigned positive) value of diff.

now imagine this table as an image with 640x640 pixels, where
black pixels mark low difference and white stands for high difference.
then you can find the depth contour of your two scanlines as a
line of connected black pixels within some noise, starting
somewhere around pixel (0,0) and heading towards pixel (639,639).

in my speech-recognition software i use this method to compare words.
there i rotate the DiffTable by 45 degrees, apply some horizontal blurring
and search for horizontal black lines to find my matches.

perhaps you can create a difference table from two typical scanlines
and visualize it to see if this graphical approach is useful for you.
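
one standard way to trace such a path (just a sketch of the usual
dynamic-programming recurrence, not what my own software does; it assumes
the DiffTable above is already filled):

    #define W 640

    /* sketch: accumulate the DiffTable into a cost table and read the best
       alignment back. path[j] receives, for each left pixel j, the matching
       right pixel i; the difference j - path[j] is then the disparity. */
    void trace_path(unsigned char DiffTable[W][W], int path[W])
    {
        static long cost[W][W];          /* static: too big for the stack */
        int i, j;

        /* forward pass: cheapest way to reach each cell from (0,0),
           moving right, down or diagonally */
        for (j = 0; j < W; j++)
            for (i = 0; i < W; i++)
            {
                long best = 0;
                if (j > 0 && i > 0)
                {
                    best = cost[j-1][i-1];
                    if (cost[j-1][i] < best) best = cost[j-1][i];
                    if (cost[j][i-1] < best) best = cost[j][i-1];
                }
                else if (j > 0) best = cost[j-1][i];
                else if (i > 0) best = cost[j][i-1];
                cost[j][i] = best + DiffTable[j][i];
            }

        /* backward pass: walk from (639,639) back to (0,0) along the
           cheapest predecessors and remember the match for each j */
        j = W - 1; i = W - 1;
        while (j >= 0)
        {
            path[j] = i;
            if (j == 0 && i == 0) break;
            if (j == 0)      i--;
            else if (i == 0) j--;
            else if (cost[j-1][i-1] <= cost[j-1][i] && cost[j-1][i-1] <= cost[j][i-1])
                { j--; i--; }
            else if (cost[j-1][i] <= cost[j][i-1]) j--;
            else i--;
        }
    }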


3)
a different approach is about reducing the resolution by two in several
steps.
let's say you start with 640 pixels again. then, for each scanline, take
the two values p[n] and p[n+1] and store (p[n]+p[n+1])/2 in separate
scanlines, which are 320 pixels wide. continue downsampling and keep all
the results until you have two lines with five pixels.
let's call them LeftLine5 and RightLine5.
now it should be easy to test if any pixel moved left or right there and
to create a displacement-map DispMap5 for the motion between the two
small scanlines.

now you take this displacement-map, scale it up to ten entries and double
all the values. take the previously calculated LeftLine10 and RightLine10
to verify the scaled displacement map. several values have to be corrected
by +1 or -1.
then DispMap10 is ready and you start again with upscaling, doubling and
checking against LeftLine20 and RightLine20.

this way you calculate big move-distances first and refine them step by step.
in each step you only have to refine the previously calculated distance by +-1.

this should limit the amount of search- or compare-operations.
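
one refinement step could look roughly like this (again only a sketch; the
names and the fixed +-1 search are placeholders):

    #include <stdlib.h>

    /* sketch: take the coarse displacement map, scale it up and double the
       values, then correct each entry by -1, 0 or +1 using the next-finer
       pair of scanlines. */
    void refine_step(const int *coarseDisp, int coarseLen,
                     const unsigned char *leftLine, const unsigned char *rightLine,
                     int *fineDisp, int fineLen)
    {
        int x, c;
        for (x = 0; x < fineLen; x++)
        {
            int j = (x / 2 < coarseLen) ? x / 2 : coarseLen - 1;
            int guess    = 2 * coarseDisp[j];   /* upscale and double      */
            int bestD    = guess;
            long bestErr = -1;

            for (c = -1; c <= 1; c++)           /* correct by -1, 0 or +1  */
            {
                int d  = guess + c;
                int xr = x - d;
                long err;
                if (xr < 0 || xr >= fineLen) continue;
                err = abs((int)leftLine[x] - (int)rightLine[xr]);
                if (bestErr < 0 || err < bestErr) { bestErr = err; bestD = d; }
            }
            fineDisp[x] = bestD;
        }
    }

start with a brute-force search on the five-pixel lines to get DispMap5 and
call this once per level until you are back at the full 640 pixels.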

you find lots on this subject with google or http://citeseer.org/; it's often
called wavelet-based or pyramidal or hierarchical calculation of optical flow.
also MPEG4 might be interesting for you, especially the motion-estimation part.
ps: the spanish link from my last posting works now. please try again.
