History of AR/VR Programming Environments? [was Re: Personal Programming Env...]


David Barbour

Sep 25, 2013, 1:22:52 PM9/25/13
to Fundamentals of New Computing, augmented-...@googlegroups.com
I would also be interested in a history for this subject. 

I've read a few papers on the subject of VR programming. Well, I remember the act of reading them, but I can't recall their subjects or authors, nor do I recall being very impressed with them in PL terms.

Does anyone else have links?


On Wed, Sep 25, 2013 at 2:43 AM, Jb Labrune <lab...@media.mit.edu> wrote:

Oh! And since I'm posting on fonc today, I'd like to say that I'm very intrigued by the notion of AR programming (meaning programming in an actual VR/AR/MR environment) discussed in the recent mesh of emails. I would love to see references or historical notes on who did what, and where, on this topic. I mean, did Ivan Sutherland use his HMD system to program the specifications (EDM) & code of his own hardware? Did the supercockpit VRD (virtual retinal display) system have a multimodal situational awareness (SA) real-time (RT) integrated development environment (IDE) to program directly with gaze & neuronal activity? :)))


David Barbour

Sep 25, 2013, 3:05:53 PM9/25/13
to danm, Fundamentals of New Computing, augmented-...@googlegroups.com
As I said below, this is no longer part of my 'recall' memory. It's been many years since I looked at the existing research on the subject, and I've lost most of my old links.

A few related things I did find:


I had looked up Landay's work regarding non-speech voice control (apparently, it's 150% faster), and I recall some of the experiments being similar to programming. I've never actually read Blair's paper; I just 'saved it for later' and then forgot about it. It looks fascinating.

VRML sucks. X3D sucks only marginally less. 

If your interest is the representation of structure, I suggest abandoning fixed-form meshes and focusing on procedural generation. Procedurally generated scenegraphs - where 'nodes' can track rough size, occlusion, and rough brightness/color properties (to minimize pop-in) - can be vastly more efficient, reactive, and interactive, and can offer finer 'level of detail' steps. (Voxels are also interactive, but they have a relatively high memory overhead, and they're ugly.) Most importantly, PG content can also be 'adaptive' - i.e. pieces of art that partially cooperate with their context to fit themselves in.
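
To make that a bit more concrete, here's a rough sketch of the kind of node I have in mind. The interface names, the screen-size threshold, and the 'context' fields are purely illustrative, not taken from any particular engine:

// A procedurally generated scenegraph node exposes cheap summaries
// (rough bounds, average color) before its children are expanded,
// so a renderer can decide how deeply to expand (level of detail)
// and can fade in an approximate impostor rather than popping.

interface Vec3 { x: number; y: number; z: number; }
interface Mesh { vertices: Float32Array; }

interface SceneContext {
  cameraDistance: number;   // distance from the viewer to this subtree
  availableSpace: number;   // room the parent offers; lets adaptive content fit itself in
}

interface PGNode {
  boundingRadius(): number;               // rough size, for occlusion/frustum tests
  averageColor(): Vec3;                   // rough brightness/color, to minimize pop-in
  expand(ctx: SceneContext): PGNode[];    // generate one level of detail deeper
  mesh?(): Mesh;                          // leaf geometry, if fine enough to draw directly
}

// Renderer-side LOD policy: keep expanding only while a node is large on screen.
function collectDrawables(node: PGNode, ctx: SceneContext, out: PGNode[]): void {
  const screenSize = node.boundingRadius() / Math.max(ctx.cameraDistance, 1e-6);
  const isLeaf = typeof node.mesh === "function";
  if (isLeaf || screenSize < 0.01) {
    out.push(node);   // draw the mesh, or a colored impostor built from the summaries
  } else {
    for (const child of node.expand(ctx)) {
      collectDrawables(child, ctx, out);
    }
  }
}

The point of the sketch is only that the summaries are available without expanding the subtree; that's what keeps the structure reactive and lets the level-of-detail steps stay fine-grained.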

If I ever get back to this subject in earnest, I'll certainly be pursuing a few hypotheses that I haven't yet found the opportunity to test:


But even if those don't work out, the procedural generation communities have a lot of useful stuff to say on the subject of VR.

I haven't paid attention to VWF. If you haven't done so, you should look into Croquet and OpenCobalt. 

Best,

Dave




On Wed, Sep 25, 2013 at 10:30 AM, danm <da...@zen3d.com> wrote:
Hi David,

Moving this outside the FONC universe, although your response might also be of interest to other FONCers.

Can you share with me your findings on VR programming? I'm aware of VRML and X3D (and its related tech.) as well as VWF (Virtual Worlds Framework), but I'm always interested in expanding my horizons, since this topic is near and dear to my heart.

Thanks.

cheers, danm
_______________________________________________
fonc mailing list
fo...@vpri.org
http://vpri.org/mailman/listinfo/fonc


