I built a live cybersecurity visualizer using JMonkey: <a href="http://code.google.com/p/3sh3ll/">the 3sh3ll project</a>.
JMonkey is pretty nice... it tempts me to work on a Java MXP client.
I've been thinking about what would really raise the bar for virtual
worlds, and I think the answer is realtime generation of the avatar's
face mesh, using a Kinect or similar depth sensor. I know people are
already working on the capture side, but imagine what it will take to
carry these "mesh streams" into the virtual world and map them onto
avatars. That sounds like a job for MXP: each participant would send a
stream of mesh diffs to animate the face of their avatar. Could the
quality of this reach the point where virtual world technology
displaces videoconferencing?
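To make the mesh-diff idea concrete, here is a minimal sketch of what
receiving and applying one such diff could look like on the JMonkey
side. The MeshDiff class and its field layout are purely hypothetical
(MXP defines no such message today); the Mesh/VertexBuffer calls are
real jMonkeyEngine 3 API.

    import com.jme3.scene.Mesh;
    import com.jme3.scene.VertexBuffer;
    import java.nio.FloatBuffer;

    /** Hypothetical wire format: only the vertices that moved this frame. */
    class MeshDiff {
        int[] vertexIndices;   // which vertices changed
        float[] positions;     // new x,y,z per changed vertex (3 floats each)
    }

    class FaceMeshAnimator {
        /** Apply one diff from a participant's stream to their avatar's face mesh. */
        static void applyDiff(Mesh faceMesh, MeshDiff diff) {
            VertexBuffer vb = faceMesh.getBuffer(VertexBuffer.Type.Position);
            FloatBuffer positions = (FloatBuffer) vb.getData();
            for (int i = 0; i < diff.vertexIndices.length; i++) {
                int base = diff.vertexIndices[i] * 3;
                positions.put(base,     diff.positions[i * 3]);
                positions.put(base + 1, diff.positions[i * 3 + 1]);
                positions.put(base + 2, diff.positions[i * 3 + 2]);
            }
            vb.updateData(positions);  // push the modified buffer back to the GPU
            faceMesh.updateBound();    // keep culling bounds in sync with new geometry
        }
    }

Sending only the changed vertices keeps the per-frame payload small;
a real protocol would presumably also need periodic keyframes (full
meshes) for late joiners and for recovery from packet loss.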
Ark