
Notes on VW Conference, SRI, 6/91 [long]


Sonia Lyris

Jul 11, 1991, 3:18:35 PM


Here are my notes on the Virtual Worlds Conference, June 17 and
18, 1991. My task at the conference was to take notes for Teresa
Middleton and I am posting these notes with her kind permission.

Standard disclaimer: I assume responsibility for any errors,
omissions, or obscurities in these notes. I made every effort to
faithfully transcribe the information coming into my ears, but I
don't always get it right, and I might have made mistakes.
Apologies in advance if I did.

Also, Ben Delaney of "CyberEdge Journal" has a report of the
conference, and though I have not seen it yet, I have heard it is
good and quite detailed. Contact him for details.


Virtual Worlds Conference
6/17 & 6/18 1991
SRI, Menlo Park, CA

NOTES ON SESSIONS
=================

(Taken and transcribed by)
Sonia Orin Lyris
s...@lucid.com
work: 415/329-8400 x5559


Plenary Session
===============
6/17 9:00am

Introductions: Teresa Middleton

Welcome: Joseph Eash, Jaron Lanier

Keynote Address: Thomas Sheridan

Topic Talks: John Thomas, Henry Fuchs

Teresa Middleton, Conference Chair, SRI.
----------------
Introductions.

Joseph Eash, Senior Vice President, Engineering, SRI
-----------
Overview of SRI and description of business. SRI is ~3000 people,
50/50 government/commercial, does basic and applied research.

Jaron Lanier, President, VPL Research Inc.
------------
(His first comment was a surprised "This podium was made by
IBM." A later speaker confirmed this, and followed up with
the question: "anyone here know DOS?")
Overview of VPL history. Gloves, helmets, etc. VPL is in
Redwood City, since '85, about 40 people. VPL sells VR systems.
Their systems rely about 98% on components from VPL. There are about 400-500
VPL VR systems in the world.
Definition of VR: "realtime 3-d graphics with head-mounted
display and gloves."
Remark on how quickly VR has been established as a research
field. More than 12 universities are offering VR programs of
study.
For challenges, we need to concentrate on less glamorous
aspects of VR, work on the details, and make the systems useful.
Integrate. This will be more work than we've done before, and
this is the current frontier. We must differentiate technology
from popular perceptions of the field. "It is clear that SRI
will be a major contributor to the field."

Thomas Sheridan, Professor, Engineering and Applied Psychology, MIT.
---------------
Keynote Address: "Virtual Verisimilitude and Recondite Realities:
Forays and Fumbles With The Phenomenon."

(Also is editor-in-chief of the MIT journal "Presence.")
"VR is a contradiction." Everyone's using different words to talk
about this stuff, and we can have fun with the words. Suggested
"Verisimilitude" instead.
Another word is "recondite"; we are really talking about technology
that deals with people and their senses. Software and hardware-- it
doesn't matter which we're talking about if you produce a "sense" in
the user. This is the "sense of presence" in another location.
Gives overview history of precursors to VR, pictures of helmets.
Remote manipulation in the '60s (Ray Goertz) was mechanically driven. Goertz
developed the "master-slave manipulator."
Among other things, VR is mapping of one sense onto another (ie:
light onto touch), as for the disabled.
Whatever words we use, the problems we're solving are implementations
of old ideas, like spoken stories, acting, artwork, writing,
photographs -- it's all a progression. Ways of developing mental
models (refers to Brenda Laurel's new book.) Are computer images
different than other forms? The user's behavioral requirements, as in
other forms, includes imagination, suspended disbelief, and attention.
What are we doing in this area that's different? Objective presence
-- the ability to control something outside of yourself on one end of
the spectrum and the feeling that you are there on the other end. In
between is "functional presence" -- the sense of what goes on at the
end of a tool, as when you are using a machine, or tool (ie: a boat on
the water.)
The determinants of presence:
1) good quality sensory feedback
2) ability to control views of sensory input
3) ability to modify the environment
All three are needed for the feeling of "being there."
Sense of presence is subjective, though reaction to unexpected
stimulus can be measured. Is sense of presence good for
anything? Entertainment, training, task accomplishments? We
don't know. Entertainment has different criteria.
Afferent/efferent filters; the filters that go to and from the
nervous system in a virtual or actual environment. We still
don't have a good way to talk about these filters, though we can
examine performance.
Discussed a student experiment using glove with vibration for
sense of touch. Results still much better than just vision
(though not as good as the real thing.) More research is needed.
We use models and data to predict the real world all the time.
We can use that model as an aid to humans, ie: time-delayed remote
control of moon-based systems. Prediction leads feedback.
Referred to study that shows this can work. It's basically an
expert system, but is also a virtual environment.
Argo -- a remote undersea vehicle. It's hard to predict the
effect of commands underwater, remotely, so they developed a
predictive model and the performance improved. This is an
intermediate level of VR. There have also been experiments with
touch and no sight.
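A minimal sketch of the predictive-display idea described above (my illustration, not Sheridan's or the Argo system's actual code): extrapolate the remote vehicle's state across the communication delay so the operator sees where it will be, not where it was.

```python
# Constant-velocity extrapolation across the link delay. All values
# here are made-up examples for illustration.

def predict_position(last_pos, velocity, delay_s):
    """Where the vehicle will be when the command takes effect."""
    return tuple(p + v * delay_s for p, v in zip(last_pos, velocity))

# Vehicle last reported at (10.0, 5.0) m, moving (2.0, -1.0) m/s,
# with a 3-second round-trip delay:
predicted = predict_position((10.0, 5.0), (2.0, -1.0), 3.0)  # (16.0, 2.0)
```

A real predictor would use a fuller dynamic model of the vehicle, but the principle is the same: the model leads the feedback.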
Q&A
Q: What about filters, does that include the human mind as a filter?
A: We need more objectivity in this business. We want filters to
be quantifiable. Explore the mind as a function of this.
Q: re filters: modulation on afferent side -- sometimes more of one than
the other?
A: Yes, it's a trade-off of afferent/efferent, ie: the system could
have lots of analysis and simple results, or vice versa.
Q: A system has to have not too much nor too little info
A: Yes, the issue is tricky -- sensory overload is not a problem;
people can handle it. We do this all the time in the world and we
filter fine already.
Q: sense of presence -- some senses are more important than
others. Not just how much, but what kind.
A: Of course. There is the issue of appropriate sensors, and
cross-modality.
Q: Presence -- maybe it's not a matter of filters, but effort;
ie: the harder you work the more real it is. Ie: learning and
physical experience are tied.
A: Yes, for example: walking instead of riding. There is
evidence that this is the case.

John Thomas, Director, AI Lab NYNEX Corp, ACM CHI-founder.
-----------
"Human Interface Issues in Virtual Worlds: An Immodest Proposal."

Computer technology today is cheap and pervasive. We have
electronic interfaces, new ones all the time, yet productivity is
not going up. Why? If you look across the nation, you'll see
flashing "12:00" on all the VCRs; maybe this is a clue. UIs are
not designed for end-users.
Thesis: design process guarantees this result. Designers and
users live in different worlds. Good human factors engineering
makes more difference than most people think.
(For example, at Three Mile Island, red lights ON indicated
okay; when lights went OFF, that indicated there was a problem.
They never decided whether operators were to follow procedures or
understand how the system was to work.)
Human Factors must be incorporated at the start of system design.
Which is better for doing etch-a-sketch: knobs or joystick?
Depends on user, task, and context.
Human Factors process -- "psychology is the mother of
invention" -- start from understanding of the problem. VR is
"well-motivated"; people discriminate and remember better with
multi-sensory input.
Speech recognition problem at NYNEX; the system could handle
"Boston", but would choke on "Well, gee, I don't know, I think
it might be Boston."
Different approaches:
1) Understand problems and design
2) design and test and re-design as necessary.
Also can use the "Wizard of Oz" technique -- use a human being
on the other end (as in natural language systems) and simulate
the system to determine what direction to go. Of course this is
especially challenging for VR, so try to solve simpler problems
first.
Need experience and theory to guide search because lab testing
takes a long time with VR. Lab testing is not the same as
real-world because users may not be representative. (for
example, a printer that spills the coffee cups on top of it when
you have to open it up.)
3rd approach: participatory design: users help with the design.
Find users with real problems to participate.
We don't know what makes "presence." Fidelity may not be the
only answer. Rather, what is entrancing may not be JUST
presence. (for example, the old non-graphic adventure games were
riveting.)
Immodest proposal: use VR to help us design VR with
participatory design, both for immersion and augmented VR.
VR technology is interdisciplinary. Who owns it? Where does
it get funding? What sort of representations should be used?
Phone/graphics/email -- these are different communication
mediums.
So use this conference; imagine how augmented VR would help
you in this conference.
The problem is that designers don't understand end-users, and
end users don't know the technology. VR will help people
communicate and build diverse worlds.

Q&A
Q: re: metaphors; are there old or new ones that VR facilitates
or demands?
A: There's walking through space.
Q: designers and users; how many of each in the audience?
(Took hand vote. There were more designers than users; about 25
people identified as designers.)

Human Perception Issues
=======================
6/17 11:30am

Panel Moderator: Tom Piantanida

Panelists: Wendy Kellogg, Dov Adelstein, James Larimer, Beth A. Marcus.

Tom Piantanida, SRI
--------------
"Virtual reality is sensational"

When is reality virtual? When events are unlike what we know
about reality.
Our "Sensorium" is what gathers info -- integrates and detects
variation from physical laws, tells us how things should fit and
look, etc. How great a discrepancy between expectation and
experience can we have and still have VR?
NASA AMES VR worked on visual interface. Wanted affordable
viewer for "common man." Liquid crystal display isn't cheap, and
isn't all that good. Technology not readily available (yet),
especially color, and high resolution displays are expensive.
What displays require:
1) Field of view of observer (varies; ie: depends on nose size)
2) Field of view of display -- want it to be as big as
possible. (It's a trade-off.)
3) View of camera or image-generator.
Many choices made by limitations of technology. We want a
display as big as the user's field-of-view, but then you end up
with coarse pictures.
How the eye works: cones are very dense in the center of
focus, less dense elsewhere. Consequences for resolution: can see
better in center of the field of view. Recent research shows
that this pattern isn't a circle as previously thought (shows
slides and graphic representation of shape of high resolution for
each eye, kind of an oval shape, centered towards the outside of
each eye.)
Even so, people's experience of the world is that everything
is high-res, so there's obviously complex analysis in the brain
making things look uniform. (Slide with concentric circles of
letters smaller in the center. Looking at the center, I could
read all the letters equally well.)
We can look at the average degrees of view for common visual
fields, like glasses, car windshield, terminal screen. Eg: "@" on
a terminal screen is about 27 minutes, which is 54 receptors,
which is 10 pixels. There's sufficient mismatch between eye and
display for a human to see the shadow mask behind the pixel on the
screen.
There is a field-of-view/resolution trade-off.
In Mac equivalent resolution:
car windshield "display": 1540x500 pixels = 770k pixels
view through (large) eyeglasses: 2800x2400 = 6.72m pixels
The best we can do today, for color, is 300x300. There's
obviously lots of discrepancy.
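The field-of-view/resolution trade-off can be put in numbers (my arithmetic, not Piantanida's slides): at a fixed visual acuity, the pixel count a display needs grows with its field of view.

```python
# Back-of-envelope pixel requirement so that each pixel subtends a
# given visual angle. 1 arcmin/pixel roughly approximates foveal
# acuity; all figures are illustrative assumptions.

def pixels_needed(fov_h_deg, fov_v_deg, arcmin_per_pixel=1.0):
    """Pixel grid required over a field of view at a given acuity."""
    h = round(fov_h_deg * 60 / arcmin_per_pixel)
    v = round(fov_v_deg * 60 / arcmin_per_pixel)
    return h, v, h * v

# A hypothetical 100 x 60 degree head-mounted display at 1 arcmin/pixel:
h, v, total = pixels_needed(100, 60)   # 6000 x 3600 = 21,600,000 pixels
```

Against the "best we can do today is 300x300 color" figure above, the gap is several orders of magnitude.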

Beth Marcus, PhD, President and founder of EXOS Inc.
-----------
"How to make Virtual Reality Feel Real."

Add more input devices, or more complex devices to workstations:
1) imaging -- take pictures of human body to see movement
2) gloves -- use fiber-optics and 3d space sensor
3) exoskeletons -- sensors track bones, individual sensors (originally
developed to control robot hand.)
4) articulated devices -- put forces against hand for feedback
(Position in space of device and degrees of freedom are issues.
Can collide with objects, but cannot move hand.)
a) SAFIRE (for NASA)
b) force-feedback joystick (Bell labs) 12-bits resolution in each
direction. Can vibrate.
Research needs:
1) feedback modes
2) feedback intensity
3) grounding -- device follows you around or not.
What kind of feedback does the user need? What intensity? What about
transitions? Feeling textures?
Useful technologies for perceptual experience. We must do the
basic research so we can use these technologies better.

Dov Adelstein, NASA Ames Research Center
-------------
"Design of Kinesthetic Interfaces for Virtual Environments"

Kinesthetic interfaces. Overview of design issues. Gross
physical sense of mechanical dynamics.
"Proprioception": force and kinematics combine to let us
perceive the world.
"Exteroception" -- external
"Haptic" -- tactile
"Kinesthesis" -- non-tactile sense of body's interaction w/
the outside world.
Information flow is mechanical. Impedance governs the
mechanical power flow. Impedance defines mechanical bandwidth,
passband for information flow. Need to shape this interface to
give appropriate feedback. Kinesthetic interfaces are like
feeling the world through a stick. Lots of research to be done.
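The impedance point can be made concrete with a standard mass-spring-damper model (my example, not from the talk): the handle of a kinesthetic interface presents an impedance Z(w) = b + j(m*w - k/w) to the hand, and that frequency response is the "mechanical bandwidth."

```python
# Mechanical impedance of a mass-spring-damper handle at frequency
# omega (rad/s): real part is damping, imaginary part is the reactive
# mass/stiffness term. Parameter values are illustrative only.

def mechanical_impedance(m, b, k, omega):
    """Impedance Z = b + j*(m*omega - k/omega) felt at the handle."""
    return complex(b, m * omega - k / omega)

# e.g. 0.1 kg handle mass, 2 N*s/m damping, 50 N/m stiffness, 10 rad/s:
Z = mechanical_impedance(0.1, 2.0, 50.0, 10.0)   # (2-4j)
```

Shaping this curve (by design or by active control) is what "shaping the interface to give appropriate feedback" amounts to.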

Wendy Kellogg, Thomas Watson Research Center, IBM.
------------
"Usability: On Beyond Perception."

Perception in context. VR should help users to go beyond their
limits.
Lots of attention being put into VRs with perceptional
characteristics. But what do we want them for? We want a sense
of presence, but what for?
Perception with cognition and usage -- when do you want immersion?
There are applications where immersion is not even desirable. We have
to examine what we need before we look at what we have.
"Market pull" versus "technology push."
There are things that people don't know they want to do with VR
yet. Our perceptual systems are designed for the old world, not
for this new one that is so much more complex. So there's a
mismatch between our minds and current world and its threats. VR
is well-suited to help make these things perceptible. Speed or
size can be understood in VR. So can changing scales.
(For example, world history, which most people think of as
human history, yet if the time span of the world were mapped onto
a year, it would not be until the last minute of December 29th
that humankind even shows up. It is this kind of scale that VR
can help us understand.)
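The scale example, worked through (my arithmetic, assuming ~4.6 billion years of Earth history; the exact calendar date depends on what you count as the first humans, which is likely why the talk's figure differs):

```python
# Map "years before present" onto one 365-day calendar year, with the
# whole age of the Earth as the year. Assumed figures, for illustration.

EARTH_AGE_YEARS = 4.6e9

def day_of_year(years_ago):
    """Day of the compressed calendar year on which an event falls."""
    fraction_elapsed = (EARTH_AGE_YEARS - years_ago) / EARTH_AGE_YEARS
    return fraction_elapsed * 365

# Homo sapiens at roughly 300,000 years ago lands very late on Dec 31:
late = day_of_year(300_000)   # ~364.98
```

Earlier hominid ancestors land a day or two sooner; either way, the compression is the kind of scale shift VR could make perceptible.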
Reads from Dr. Seuss's "On Beyond Zebra!" The old "alphabet" just
isn't enough anymore.

System Architecture
===================
6/17 2:00pm

Moderator: Donald Nielson

Panelists: Michael Moshell, Herbert Taylor, Warren Robinett,
Larry Koved.

Donald Nielson, SRI

Michael Moshell, U of Central Florida
---------------
"Networked Virtual Environments: Issues and Applications."

Military training systems (education), SIMNET (DARPA), issue
of large, detailed databases. There is the issue of distributed
interaction simulation standards. For realtime simulators, we
need to study human factors, etc. Realtime physics; Linx 2 --
commercial mechanical modelling system.
Distributed Virtual World involves many issues.
Description of SIMNET -- a "dead reckoning" paradigm -- an
extrapolation technique to reduce net traffic. It is very effective.
Has "ghosts" and "players."
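A sketch of the dead-reckoning idea (my reconstruction, not SIMNET code): each host extrapolates a cheap "ghost" of every remote player, and a player broadcasts a fresh state packet only when its true state drifts too far from what the others' ghosts will show.

```python
# SIMNET-style dead reckoning, minimal form. Threshold and states are
# made-up example values.

def ghost_position(update_pos, update_vel, elapsed_s):
    """What remote hosts extrapolate from the last broadcast state."""
    return tuple(p + v * elapsed_s for p, v in zip(update_pos, update_vel))

def needs_update(true_pos, ghost, threshold=1.0):
    """Broadcast a new packet only when extrapolation error grows too large."""
    err = sum((t - g) ** 2 for t, g in zip(true_pos, ghost)) ** 0.5
    return err > threshold

g = ghost_position((0.0, 0.0), (1.0, 0.0), 2.0)  # ghosts show (2.0, 0.0)
needs_update((2.1, 0.0), g)   # False -- small drift, save net traffic
needs_update((5.0, 0.0), g)   # True  -- drifted, broadcast new state
```

Most of the time the ghost is close enough, so the network carries occasional corrections instead of a continuous state stream.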
Necessary for distributed virtual worlds:
1) Conviviality: anyone can use it. (Want compatibility benchmarks.)
2) Flexibility: move models from one system to the next
3) Performance: realtime physics, multiple levels of fidelity (to
allow for different hardware.) (Other questions arise with rich
databases -- copyright, "fair use", etc.)
Computational strategies to accomplish these goals.
SIMNET -- each simulator has a host computer. No central
arbiter. It's a simple but restrictive system. One processor
per player, with extra processors. Flexible application of
computer power, but adds complexity of allocation. The "goal"
then might be to play "ping pong" across the net. Every
simulator has its own copy of the database. Terrain is
unmodifiable. We might want to change that, but that would tax
net to upload that info.

Herbert Taylor, David Sarnoff Research Center
--------------
"Architectures for Virtual Reality"

We want "the ability to be in the wrong place at the right time."
Virtual visualization. Step into data, interact with data.
Like televisualization. We are still struggling with
definitions.
Can't talk about architecture without talking about data
structures, etc. We need new programming paradigms.
Explosion of size and complexity of scientific data. Need new
ways to see data. 3D data -- synthetic (like weather) and
sampled (CT, MRI) -- are becoming more common. Need real-time
processing and lots of memory. (A gigabyte or so.)
General-purpose vs application-specific computers. Even though
application-specific computers deliver better performance, they're more
expensive. (Cray & TMC are introducing HIPPI -- gigabit I/O -- products.)

Warren Robinett, University of North Carolina
---------------
"Perceiving the Imperceptible"

(Videotape of head-mounted display -- UNC Chapel Hill -- walk
through a protein molecule, test drive through virtual city,
etc.)
VR can be hooked up to the real world, extending perception
system, like physician wearing see-through, head-mounted display.
(Eg: seeing a fetus in a pregnant woman.) "Sensory transducers":
Using imperceptible phenomena, like magnetic fields, atoms,
ultrasound, xrays, radio waves, and events happening too fast to
see.
What do imperceptible things look like? Our choices:
1) arbitrary representation -- invented
2) conventional -- may take time for conventions to stabilize.
With VR, anything you can detect, record, or remotely-sense
can be experienced with senses.
Q&A
Q: Suggestion: people may interact differently with such devices.
A: Imagine social interactions if we had -- say -- earrings that blink
with your heart rate.
Q: Flip side of extending sense, can also take things away to
lessen confusion.
A: That's a filter, not different in kind.
Q: This emphasizes the qualitative, might want more precision
at times.
A: Is view of the world quantitative? Suppose you could
super-impose rulers or some other form of measurement on
everything; that would quantify.

Larry Koved, Thomas Watson Research Center, IBM
-----------
"Architectures for Industrial Strength Virtual Worlds"

There are already misconceptions about VR. (Shows a New Yorker
cartoon.)
Industrial strength architecture:
- Behavior of virtual world
- Virtual world interfacing -- user model
- Collaboration -- participants
Behaviors:
- games -- simple interactions
- walk-throughs -- large databases of information.
(These first two are limited interactions.)
- complex simulation -- superimpose virtual info on real world.
(Showed video of vortex tubes and rubber rocks -- simulates
flexible objects with virtual world interface.)
They use 4'x5' screen instead of eyephones, and this has a
dramatic effect on the presentation. Specialized interfaces, in
collaboration -- goal may be the same but the interface is
different.
Architecture:
* Device servers (UI) -- graphics, etc.
* Application/simulation processes
* Rules: how IO interacts with applications
* Interaction with other virtual worlds
Q&A
Q: How much of this architecture is in place under shown demos?
A: It's there under both demos.

Application Development Toolkits
================================
6/17 4:15pm

Moderator: Chuck Blanchard, VPL Research

Panelists: Randal Walser, Eric Gullichsen

Chuck Blanchard, Director of Software Engineering, VPL
---------------
World development toolkits. VPL Body electric, 4 years ago.
Design goals for Body Electric:
- Fast
- "Grey scale" approach to programming, visual programming language,
easy for both novice and experienced users to use.
- visual/interactive


Eric Gullichsen, founder, Sense8 Corp.
---------------
"WorldToolKit: A Toolkit for Implementing Virtual Worlds on Desktop
Computers."

The system runs on a 486 machine. "I'm going to make two outrageous
claims."
1) Our toolkit is 10x cheaper than anything else.
2) Texture mapping -- polygon rates for rendering -- this is
the wrong path. No polygons -- textures instead. The toolkit
has hybrid-approach for video-realism.
"Each new medium subsumes the rest."
Sense8 -- 6 people, Sausalito. Goals to make 3d graphics
affordable, and provide highest quality software.
History of VR: '57 Heilig "Telesphere Mask" (panoramic view
goal), Sensorama, NASA AMES. While VR is synonymous with goggles
and gloves, there might be other i/o devices.
WorldToolKit runs on PCs and Sparcstations. It has C libraries,
which are portable, extensible, and object-oriented. Does realtime
rendering, sensor drivers, geometry readers. It is not a modelling
system.
Sensor drivers supported:
- polhemus, ascension
- geoball, spaceball (force and torque)
- powerglove, dataglove
(Shows video.) Without textures world looks cartoonish. The
real world has texture. With textures you can have books with
titles, wood grain, etc. For example, texturing makes trees
obviously trees. Building worlds is like calligraphy. Realtime
texture mapping, easy to create databases.
Complete system less than $20k. Applications: architecture,
engineering, construction, leisure, simulation. Can use camera
to acquire images, also can buy libraries.
Future -- ubiquity -- realtime 3d will be everywhere.
Hybridization -- realism without losing texture.


Randal Walser, Autodesk
-------------
"Adaptive Cyberspace Programming"

TRIX -- interactive system that extends developer's kit.
Alternative view of software development. Have compilers, cad
systems, OS, etc -- everyone who uses a product uses same tools
and uses same style -- it is fragmentation.
3rd-party developer must join a "camp" and reject all other
camps. But they want to sell to as many people as possible -- to
infiltrate and sell widely. VR and cyberspace are emerging as an
industry. Now let's get "space makers" in business. A
spacemaker builds simulations, makes them seem real.
Actions in physical world map onto simulation and back again.
Cyberspace development includes languages, tools, architecture,
etc.
TRIX addresses languages and tools issues, different
architectures, and languages. Experimentation -- wants
a deep-level language.
Cyberspace industry will be diverse, and will include
different languages, spaces, computer, peripherals, cultures.
Standards -- there will always be alternative camps and
cultures. We want to accommodate this diversity.
TRIX has a different model. In the usual model, there are a
number of "decks" that use a central simulation. With TRIX, they
all use their own simulations, and all the stations exchange
information.
TRIX designer creates "reactors" which map input onto CS
actions.
The CS developers' kit -- create shapes with autocad, create
virtual objects and link them to shapes, link objects into
multi-body systems, create the classes needed for the space,
position the objects, link the sensors to the space, and start
the simulation.
The actual code is very simple, for example:
  begin
    sensors     -- sense physical events
    simulators  -- generate virtual events
    effectors   -- generate physical events
  again;
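The begin ... again form above reads like a Forth-style infinite loop. A Python rendering of the same sense -> simulate -> effect cycle, with hypothetical stub names (not Autodesk's code):

```python
# Spacemaker main loop: physical events in, virtual events out.
# Device callables here are stand-ins for illustration.

def run_simulation(sense, simulate, effect, steps):
    """Run the sensor/simulator/effector cycle a fixed number of steps."""
    for _ in range(steps):
        physical_events = sense()                    # sense physical events
        virtual_events = simulate(physical_events)   # generate virtual events
        effect(virtual_events)                       # generate physical events

# Tiny smoke test with stub devices:
log = []
run_simulation(sense=lambda: "glove-move",
               simulate=lambda e: "update:" + e,
               effect=log.append,
               steps=2)
# log is now ["update:glove-move", "update:glove-move"]
```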

David Levitt, VPL
------------

On Hookup. Iconic dataflow, use real-world knowledge. Hookup
has 32 icons. Goal to have non-programmers be able to use the
system. Modeless interface -- 3 tools -- hookup, cute and move.
Don't need a manual.

Q&A
Q: Shouldn't we be defining all these rules in VR?
A: Sure, we're working towards that. Tools are being built. For now,
tracking and display technology is still shaky.
Q: Which of these products is actually available now?
A: VPL and Sense8.
Q: Can these systems work together?
A: In theory, but it hasn't happened yet. Definitely possible. Need
to experiment.
Q: What about pricing on TRIX?
A: None yet. It's a possible product.
Q: If you're just starting out building worlds, what should you use?
A: Anything. Just build your model and try it out.


Opening Session:
================
Tuesday 6/18 8:30am

Jaron Lanier, VPL
------------
"Report from the Field"

There are about 400 VPL systems in the world. Most in Japan,
next the US, and some in Germany and France. In the 90's VPL
started selling primarily to pioneers and later to actual users.
Getting the user's existing work onto the system is most of the
challenge. (This is true of most information-based systems.)
VPL is collaborator with the customer, not merely the supplier.
Customers: Boeing; Matsushita (VR system that lets Japanese
housewives redecorate their kitchens in VR. Combines cad-cam and
VR and has clear goal.); Brooks Air Force. Often a university
will work with a company and VPL.
How to get data into a VR system? Customers often have data
but not in a compatible form. End up spending time processing
models, so introducing product that does this with the user.
Scientific databases can be very large.
Different types or qualities of VRs (roughly):
- complexity -- frame rate (visual realism: ~12hz)
- responsiveness (training, want faster system, so ~30hz.)
Approximate correlation between price and quality in VR systems.
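Those frame rates translate directly into per-frame time budgets (simple arithmetic, my note, not from the talk):

```python
# Milliseconds of compute available per frame at a given update rate.

def frame_budget_ms(hz):
    """Per-frame time budget for a target frame rate."""
    return 1000.0 / hz

frame_budget_ms(12)   # ~83 ms/frame -- the visual-realism figure above
frame_budget_ms(30)   # ~33 ms/frame -- the responsiveness figure above
```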
Entertainment via VR in the future:
- high-quality (not yet) complete with theme park rides and
movies (VPL is working with MCA)
- medium-level, home-entertainment -- still expensive.
- low-end, Nintendo-like VR (VPL has a followup to the Mattel
glasses, "sort-of" eye glasses)
VR systems will sell like computers, but not for another
decade or so. Hardware choices are specialized. Most commercial
applications are built for specific purposes; they are not "pretty
worlds."
(Shows video-demo, including: snake attack; subway (for
Berlin); office floor-plan (architectural walk-through); room
with teapot that you can go inside, with faces on the teapot.)
Visually pretty worlds like these are harder to sell. More common
are simulations that have less visual realism but where other details
are emphasized.

Q&A:
Q: The Nintendo-like VR. Is that for Xmas '92?
A: Possible. Don't know.
Q: Lower cost, high resolution retinal scanner -- when?
A: We have 2 eyephones this year -- HRX (high res eyephone -- 4x res
of original.) It is a function of cost, demand, and supply. Retinal
scanner -- long-term, not anytime soon.
Q: Increased resolution gives what results?
A: Text in display; many applications are verification, so we could
have text on the objects in VR, too. Also in "pretty worlds"
resolution makes a difference. Helps with hand-eye coordination.
Field of view. Less distortion.
Q: Best-guess for color resolution, 2, 5, and 10 years?
A: 2 years, we'll have 2kx2k, it'll be expensive. There will
be a plateau where resolution is "good enough." 2x might be that
plateau -- users will decide.
Q: How will VR be used in education and entertainment?
A: Education and entertainment haven't really started with VR
yet. Not making money yet. Need shared world where people
create the world. That's where the focus needs to be.
Q: What's going on with the Brooks Air Force application?
A: See the publications. Training systems.
Q: And MCA?
A: VR systems are expensive. With VR, the end presentation is
at least as expensive as the original recording machine, so there
are different economics at work (from TV & movie industry.)
Scripts. Problem is that you have to depend on the person to
decide when the VR experience is over. Probably will have
different types of tickets for quality of involvement in the
system. To solve scripts problem, will use live performers in
the system to control user's timing. It will be a franchise
business.
Q: What time limitations will there be in this system?
A: Don't want to give details.
Q: What about "reality built for two"?
A: That's long term, depends on network communications.
Headmount networks better than other approaches, that's why we
use it. Reality-net project -- fiber lines at a distance. Two
basic types of telecom services: networking groups to share
simulation, and separate, remote users from the main system.
Q: What about standards?
A: We are treating the bottleneck problem as our standard
(importing data.) Industry-wide -- too early. Sharing 3d forms
is difficult (because representations are varied.) Worse is
sharing dynamic forms. We are writing converters and looking at
dynamics.
Q: Telecom: interactivity in hooking up VRs to same system?
A: Now most users are using single-user systems; that probably
will change. For 2 people, you have gloves and goggles, and
shared physical space. Bottleneck is drawing and dynamics
processing. Our initial system supported 2, now maybe would
support 4 or 6.
Q: What about your talking to the senate?
A: Hearing for US senate, science and technology. Theme was
the federal role in promoting new US technology. Fred Brooks,
Tom Furness also presented. Transcripts have been published.
Q: Networking: what bandwidth requirements are there?
A: Depends. We use a central node computing ethernet. Used a
modem another time to synchronize. We don't know yet, and are
exploring this. Think we'll have to settle on a standard soon.
Probably short of full video, but close.
Q: What about video and generated reality combined?
A: We've done some. "Video-sphere." We could project a face
onto a face, but our customers don't need it. Eventually we will
have VR camera, but it will be complex. 5 years, maybe.
Q: What about body-suits and physical UIs?
A: VPL data suit -- unwieldy compared with gloves and goggles.
Body-suit will help in entertainment. Esoteric. Predict new
devices that include feedback. Important area. Ours are
unannounced as of yet.


VR For People with Disabilities
===============================
6/18 10:00

Moderator: Harry Murphy

Panelists: Jane Hauser, Walter Greenleaf, Neil Scott, Deborah
Gilden, Hugh Lusted & R. Benjamin Knapp.


Jane Hauser, US Department of Education
-----------
"From Computers to Virtual Reality: A World of Change for Persons
with Disabilities"

Mostly funded simulation (software and video tapes.) Business
and military applications have sophisticated systems, do
training, useful for education.
Issues in transferring technology from military and business
to education:
- money -- need to be able to buy the technology
- flexibility of the technology -- can it be applied
- organizational dynamics: how it is introduced into the school
- training: how much is needed
SRI project investigated these issues.
User is critical in education -- the danger is that technology is
used for its own sake and isn't really needed. Investigated
needs and technology "cluster" solutions. Augmented by case
studies. SRI developed 10 scenarios, analysed necessary funds
and areas of research.
VR potential for giving feeling of power over world to
disabled. Girl in wheelchair can play hide and seek in VR, can
play ball, whatever. We see these children as "going somewhere"
with appropriate aids. VR may help.
Q&A:
Q: Private sector funding?
A: No. DS does own development. "Education 2000" emphasizes
collaboration.


Walter Greenleaf, Greenleaf Medical Systems
----------------
"Dataglove & Datasuit for the Medical Market"

Used to be neurobiology student, then started Greenleaf using
VR spin-offs. Dataglove and Datasuit. Migrate from VR to
medical community. Custom-made. Want to make commercially
available.
Small group in PA -- looking for collaboration, alliances with
other organizations. Dataglove: fiberoptic sensors on hand and
fingers. Polhemous magnetic sensor measures absolute location of
glove.
(video: data suit program.)
Numerical analysis of how body moves in space.
3 product lines:
1) motion analysis system -- clinical medicine, ergonomic
analysis, rehab.
2) gesture control system -- input for physically impaired
3) glove talker (implementation of #2), maps gesture to voice
synthesis. Used by Loma Linda Dept of Neurology.
Q&A:
Q: Glove-talker uses sign language?
A: Finger spelling, not sign, because sign is more complex. If
someone already knows sign then they don't need this. Or for
communicating with speakers.
Q: what about foreign language?
A: Sure, could use for that.
Q: For something like cerebral palsy, how would you filter
unintended movements?
A: Use second hand for validating signal, and filter.
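The glove-talker pipeline described above (gesture to voice synthesis, with the second hand validating the signal) might be sketched roughly as follows. All names, the toy gesture alphabet, and the event format are illustrative assumptions, not the actual Greenleaf design.

```python
# Hypothetical sketch of a glove-talker pipeline: recognized finger-spelling
# gestures from the primary hand are buffered into letters, and a "confirm"
# gesture from the second hand flushes the buffer to a speech synthesizer.

GESTURE_TO_LETTER = {"fist": "a", "flat": "b", "point": "c"}  # toy alphabet

def speak(text):
    # Stand-in for a call to a voice synthesizer.
    print("SYNTH:", text)

def glove_talker(gesture_stream):
    """Consume (hand, gesture) events; right hand spells, left hand confirms."""
    buffer = []
    for hand, gesture in gesture_stream:
        if hand == "right" and gesture in GESTURE_TO_LETTER:
            buffer.append(GESTURE_TO_LETTER[gesture])
        elif hand == "left" and gesture == "confirm" and buffer:
            speak("".join(buffer))  # validated by the second hand
            buffer.clear()
    return buffer  # unconfirmed letters are left unspoken (the filter)

events = [("right", "fist"), ("right", "flat"), ("left", "confirm"),
          ("right", "point")]  # last letter is never confirmed
glove_talker(events)
```

The second-hand confirmation is one simple way to realize the filtering of unintended movements mentioned in the Q&A.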


Neil Scott, Cal State Northridge
----------
"Virtual REality and Persons with Disabilities"

Find practical ways to take technology and apply to group.
Have disabled on campus and about 40 computers. Each computer has
a special attribute. Learning-disabled especially applicable.
The world is unreachable to the disabled. They don't
experience things we take for granted, like playing with blocks, or
spatial concepts. Blind navigation.
Graphical UIs on computers -- blind cannot use these. What about
audio images? Simplistic approach: attach verbal info to all icons, but hooks
for this are usually not present in the software. Soundspaces. Still
expensive. General mapping to sound would be ideal.
Input from users generally at very low bandwidth. Dataglove
might add subtlety. Use piezoelectric sensors for eyebrow
movements -- still mostly on/off sensors.
Want to employ disabled students, and encourage them in math and
science.
"Universal access system" -- standard for infra-red
interaction with any computer.
Q&A
Q: is VR an "equalizer"?
A: Well, it might give disabled advantages beyond equality.

Deborah Gilden, Smith-Kettlewell
--------------
"To see or Not to See: Blind Man's Bluff in the World of Technology"

VR isn't magic, but it might look like it. How VR enables
disabled to do and experience new things, like walking, flying,
etc.
Goggles don't help for blindness, so need to use sensory aids.
What can VR do?
Auditory: "eary" feeling. Early 40's interviewed blind
walkers, who said they could "feel" objects on their face
("facial vision"). This is really an auditory phenomena.
"Echo-location" and sound-shadowing.
Late '70s: using a dummy head with earphones, fed into a stereo
tape recorder -- this is an auditory VR. Could practice things like
escape routes, or have the fun of a trip through the jungle.
"Brain plasticity" -- mapping visible light onto something
else, like skin. Developed TVSS -- Tactile Vision Substitution
system, map 20x20 "tactors" onto skin of back of user. Users can
"see" perspectives and movement, even flames.
Computer screen to blind person -- text info. Usually use 1
line of braille at a time. New system uses a roller-slot
"window" of braille, and it was 75% faster than the previous
system.
Give non-disabled the experience of being disabled so that we
can figure out what the disabled need. Also for empathy.

Hugh Lusted (Stanford) & R. Benjamin Knapp, Ph.D.
-------------------------------------------------
"Nervous System/Computer Interface: New Controllers for Computers"

History: biomed switch -- takes bio signals, outputs MIDI
code. '89: BioControl Systems company. "Eyeconer" product.
Inputs: brain waves (EEG), eye movements (EOG), muscle
movement (EMG), audio.
Outputs: pitch, velocity, timbre, rhythm, others.
User doesn't need to actually move, just to use the muscles.
Can also use EEG as a switch -- to use blinking, or even
recognize sleep.
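The input/output mapping above might look something like the following in code. This is an illustrative sketch, not the actual Biomuse design: the signal ranges, gains, and the choice of eye angle for pitch and muscle amplitude for velocity are all assumptions.

```python
# Hypothetical biosignal-to-MIDI mapping: eye position (EOG) picks the
# pitch, muscle tension (EMG) the velocity, and a blink (EEG/EOG) acts
# as the note-on switch. All ranges and gains are illustrative.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] into [out_lo, out_hi], clamped."""
    value = max(lo, min(hi, value))
    return int(out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo))

def biosignals_to_midi(eog_deg, emg_uv, blink):
    """Return a 3-byte MIDI note-on message, or None if no blink switch."""
    if not blink:
        return None
    note = scale(eog_deg, -30, 30, 48, 72)    # eye angle -> notes C3..C5
    velocity = scale(emg_uv, 0, 500, 1, 127)  # muscle amplitude -> loudness
    return (0x90, note, velocity)             # note-on, MIDI channel 1

msg = biosignals_to_midi(eog_deg=0, emg_uv=250, blink=True)  # (0x90, 60, 64)
```

Note that, as the talk emphasizes, the user need not actually move: the EMG amplitude reflects muscle activation, which is present even in isometric effort.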
(Shows tape demo, news clip from KRON -- "Biomuse". Easy to
use, simple mapping of signals. "Air violin" news-cast video:
shows a disabled man playing violin in the air with the minimal
physical motion he was capable of. Also showed an eye-cursor:
three-year old unable to move anything but her eyes moving a
cursor around the screen with just her eyes.)
Q&A
Q: How repeatable are the sensor motions detected?
A: Very much so. Muscle sensors can vary, and can mount sensors
on braces as well.
Q: How long can people wear these sensors?
A: As long as they like. No problems with comfort.


Telepresence and Teleoperations
===============================
6/18 11:30am

Moderator: Philip Green

Panelists: Richard Satava, Rodger Cliff

Rodger Cliff, Lockheed Industries
------------
Task: locate and pick up box entirely in VR. Video of robot
arm controlled by operator. Robot arm probes box to find
location. The VR shows as much of the box as it is sure of and
also shows uncertainty. 3-d simulation has effect of letting
operator move a camera around the workspace. Lockheed found VR
is more effective than using the actual video of the workspace.
Q&A
Q: Red shadow shows intended location of robot arm. Why is the
arm not where intended?
A: Time delay. (Could also be physical limitations.)
Q: Is there any force feedback on the arm?
A: Not yet, but is intended for future.
Q: Have done any experiments with time-lag?
A: Not really, only those inherent in the system.
Q: Quantification on task success?
A: Not yet, but collecting info.


Philip Green, SRI
------------
"Tele-operation in a Virtual Workspace"

Goal: to do telepresence surgery.
- Tele-operation -- tele-robotics, TV control, human control.
- Tele-presence -- operator feels they are there.
A small piece of remote real world to manipulate. Current
system: 4 degrees-of-freedom manipulators, stereo-graphic video
cameras, 1-to-1 mapping of forces. Future: miniature
manipulation, a second hand, tactile sensing.
(Video of remote surgery system: remote slicing a grape into very
thin slices. The grape is held by a human hand and the slicer is
working remotely.)
Telepresence can enhance dexterity and hand-eye coordination.
Q&A
Q: How to provide tactile or force-feedback?
A: Force-reflecting servos.
Q: I object to the terms "telerobot" and "teleoperator" as synonyms.
The first is a subset of the second.
A: Thank you.
Q: How is the bandwidth on force-feedback?
A: It is adequate.
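The 1-to-1 force mapping and force-reflecting servos mentioned above can be sketched as a toy bilateral control loop. The one-dimensional world, the proportional gains, and the function names are illustrative assumptions only.

```python
# Toy sketch of bilateral tele-operation with force reflection: each control
# tick, the slave servo is driven toward the master's position, and the
# contact force sensed at the slave is reflected back to the operator's hand.
# Gains and the 1-D setup are illustrative, not SRI's actual controller.

def bilateral_step(master_pos, slave_pos, contact_force, kp=5.0, kf=1.0):
    """One control tick: return (slave command force, reflected master force)."""
    slave_cmd = kp * (master_pos - slave_pos)  # slave servo tracks the master
    master_feedback = -kf * contact_force      # operator feels the contact
    return slave_cmd, master_feedback

# Master leads the slave by 0.1 units while the tool presses on a surface
# with 2 N of contact force:
cmd, felt = bilateral_step(master_pos=0.5, slave_pos=0.4, contact_force=2.0)
```

With kf = 1.0 the operator feels the remote contact at full strength, which is one reading of the "1-to-1 mapping of forces" in the system description.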

Richard Satava, U.S. Army, MD FACS
--------------
"Telepresence Surgery"

Goal is to have less invasive surgery. Could project surgeon
"self" into abdomen, for example.
History: endoscopic -- fiberoptic to video ('50s).
Laparoscopic -- video-chip at end of endoscope, but it restricts
the surgeon to using a single hand.
(Showed video of actual procedure. Inserts camera and
instruments through small hole, like a 10 mm opening. Jerky
motions due to lack of depth perception. They inflate the
abdomen, and remove the gall bladder by sucking it up through a
small straw through the opening.)
The advantage is much quicker recovery. Thus people can
return to work sooner.
Laproscopic surgery uses 2 surgeons at once.
With telepresence:
- surgeon works at the workstation in 3-d workspace
- exploring the usefulness of sound feedback as an enhancement
- could use force feedback instruments
- have heads-up display for general information like oxygen,
blood pressure, etc.
Did 1 week's worth of experimenting with their telepresence
surgery system and found that the accuracy is greater than
without, and could be further improved. The accuracy is
"logorithmically better." Conclude that the technology is mature
enough.
Some questions about the technology: is it mature? Is it
necessary? Will users accept it? Is it cost effective?
The technology is mature; components are available. Users are
accepting the technology already; the basic leap of faith for
surgeons was to take their hands off the patient and use the
remote system instead, and they have already done this. Is there
a need? Yes. Is it cost effective? We don't know yet, but we are
sure that it will be in time.
What next? Refine the system and try with multiple users.
Ideal is to have expert assistance remotely, eg: the space
station.
Q&A:
Q: What about training applications and simulations (with no body)?
A: Yes, can tie the system to VR for training. Users might not even
be able to tell the difference between simulation and the real thing.
This is current technology.
Q: Seems to require very precise motions. Can you try to
change the scale?
A: Yes. This is pending.
Q: Can do diagnosis of internal organs, ie: tissue sampling?
A: Yes. It's only a question of how invasive the procedure is.


Joseph Rosen, Stanford University
------------
"Surgical Simulation -- Past, Present and Future"

Surgical simulation has been going on for 3000 years.
CT-scan-model workstation, already do this. Next step is to use VR.
Goal: surgeon with goggles using cadaver with simulated blood, etc.
Analogous system is flight-simulator.
Telesurgery. (Anesthesiologist still needs to be actually present
"for billing purposes.")
Simulation advantages: decrease cost, risk,
training-improvement. How well does simulation match reality?
As in flight simulation -- reduces learning curve but doesn't
improve skill in general. Allows you to practice highly-unlikely
events. Allows "freeze and replay." Allows better testing of
medical persons.
(Showed video of simulation skin and muscles for plastic surgery.)
Q&A
Q: How much computer power is needed?
A: Depends on how many polygons you need. SGI + 2000 polygons allows
real-time, 30 frames/second. But what do you want to see in realtime?
Some things you can see in realtime with relatively little computing
power, and other things take more power.
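The polygon budget quoted in that answer is easy to check with arithmetic; the calculation below simply spells out the figure implied by 2000 polygons per frame at 30 frames/second.

```python
# Back-of-the-envelope check of the Q&A figure: real-time rendering at
# 2000 polygons per frame and 30 frames/second implies the polygon
# throughput the hardware must sustain.

polygons_per_frame = 2000
frames_per_second = 30
polygons_per_second = polygons_per_frame * frames_per_second  # 60000
```

That 60,000-polygons/second throughput is modest by later standards but was near the ceiling of workstation graphics at the time.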

Future Issues
=============
6/18 2:00pm

Moderator: Teresa Middleton

Panelists: Thomas Sheridan, Joanna Alexander, Warren Robinette, John
Thomas

(Round-robin discussion, too quick to distinguish individual
speakers.)

Technology push versus market pull -- do people really need VR?
Hard for people to want what they don't understand. Are the needs real
or not?
Is VR a solution looking for a problem? No one's paying for VR yet
because the technology is expensive. And yet 12-year-olds want it.
But scientific visualization is a real need, as is education. We
need to go beyond the "gee whiz" demos, and show demos in context.
We are at an early stage in the technology. Many advances are
needed.
Products get out at first, even if they aren't initially useful.
This is stage one. So what should we get "out there"?
There are government contracts. What about commercial ones?
There are medical applications; there's both a need and
willingness to go in that direction. There's entertainment --
arcades? And there are "intuition builders" for math and
science.
There's a draw-back: the lack of tactile sense.
"Virtuality" has "air-glove" so that you can get some physical
feedback.
What about the sense of presence? It depends on the senses,
but what if it's not a photo-realistic VR? Is it still useful?
Example -- show artwork in its original setting in VR, and the
sense of presence of being there (wherever the artwork was
originally placed) would enhance the experience of the artwork.
Maybe there should be a transitory ceremony when you go into
VR.
Cartoonists can capture a sense of reality in just a few lines
-- do we need to do more than that in VR? We're "hung up on
polygons." Yet we don't know how to simulate what cartoonists
do.
Separate effect from reality in VR, helpful in child
development.
Intensity -- measure adrenaline in video games. Important
aspect. Must include subtlety as well as intensity. "Presence"
will be measured in many dimensions.
Can we trick the body into producing illusion?
VR as a medium for communication. Surreality? Abstraction?
More modalities at lower resolution may give more realism.
There's lots of work being done in visual and aural areas, but
little in tactile areas. Hot/cold, smell and taste are not done
as much, either. We need smell recorders, players and
amplifiers.
Studies show that children explore as long as there is still
detail to be discovered.
Structure of the technology is intrinsically for white males.
What about affirmative action?
Have girls create the VR worlds. Girls create different kinds
of worlds.
The last SIGCHI was about cultural diversity in interface design
-- are we engendering bias?
We each carry baggage and have unique perspectives. Most of
us are white and male. We can't avoid influencing our work.
"Men are visual, women are tactile." (Objections to this.)
Technology may decrease alienation, not increase. Building on
your own interests is the most important thing. Rather not
categorize.
Seems like a non-issue -- people are generally the same in
their perceptions of 3-d.
"VR success depends on how well it mirrors society, warts and all."
"But media changes society."
"A word processor does not have a sexual bias."
Body language makes it through the VR filter.
Bottleneck is that of building models for VR. Auto-generating
models? Shared models-- how about a 900-model number? The
market will generate this as needed. But the bottom line is that
models are hard to make.
Fusion of TV and VR. But VR is more like the phone than TV.
(From man in audience with German accent): I'm from Germany.
All of this sounds like marketing. In Europe the media has been
describing VR a lot. In Germany, VR technology has much less
hardware to work with than here. We are waiting on the
technology. It's a waste of time to worry about special issues
right now. Let's just get on with it.

[end]
