Cameras and recording (long)

John Bytheway

Apr 12, 2001, 8:22:09 AM

Stop me if this has been discussed before, but I seem to remember someone
mentioning cameras in a recent thread. I've been thinking about the
possibilities and problems thereof, and would like to throw a few ideas
around. Also note that I'm working in Glulx, so implementation discussion
will probably apply to Inform/Glulx more than TADS.

Clearly, cameras have great potential for puzzles if implemented well. You
can watch the consequences of your actions in another location, you can see
what happens after you leave, you can lower the camera down a well on a rope
to see what's below, you can record in the infra-red or ultra-violet spectra
and thus see in the dark without frightening away nocturnal creatures.

But the problem here is how to implement it well. There appear to be two
broad types of camera (although some cameras do both): the real-time
playback kind, like CCTV cameras, where you can sit in the security office
and watch all the goings-on in the building as they happen; and the
delayed-playback kind, like most home-video camcorders.

The first kind shouldn't be too hard to implement: make the camera itself a
container, and when the player examines the monitor, move them into that
container, e.g.:

Security Office
A monitor displays the activity in the lobby.

>x monitor

Lobby (seen through the monitor)
The spacious lobby is well presented.

You can see a sack here.

>x sack

The sack sports a large dollar sign.
A man wearing a balaclava enters the room.

>x man

He's the thief alright.

>out

You extract your awareness from the monitor.
Security Office

>n

Lobby

You've caught the thief red-handed. Well done!

The thief will only enter the lobby after checking that the player is not
there. Of course, he does not check inside the CCTV camera in the lobby.
Care would have to be taken if the player might instead have hidden in some
other container in which the thief might notice him, like a large glass
box. It might, if many such possibilities presented themselves, be
necessary to change the player object to a PlayersAwareness object which is
then moved to the appropriate camera. In this way, other objects can make
the distinction between the presence of the player in body and in mind.
What's more, the player's real body is still back in the security office,
so other nasty things can happen to him, and he can see himself if he looks
at a monitor connected to a camera pointed at himself, whereas before he
would have mysteriously disappeared.
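
A minimal sketch of that switch, assuming an already-defined lobby_camera
container and a verb wired up to ExamineMonitorSub (both names are mine;
ChangePlayer and PlayerTo are the standard library routines):

Object PlayersAwareness "your awareness"
  with number 0,   ! the library expects player objects to provide this
       description "A disembodied point of view.",
  has  concealed animate;

[ ExamineMonitorSub;
    ChangePlayer(PlayersAwareness);   ! become the awareness...
    PlayerTo(lobby_camera);           ! ...sitting inside the camera
];

Going 'out' would do the reverse: ChangePlayer(selfobj) and a quiet
PlayerTo back to the security office.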

An ambitious project might be to allow the player to see both what is going
on around him and what is visible through the camera, by opening a second
window (Glulx only).
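
The Glk side of that is only a few calls (a sketch assuming infglk.h;
camera_win is my own global, gg_mainwin the library's main window):

Global camera_win;

[ OpenCameraWindow;
    camera_win = glk_window_open(gg_mainwin,
                     winmethod_Above + winmethod_Proportional, 30,
                     wintype_TextBuffer, 0);
];

[ CameraPrint;   ! print a line into the camera window
    if (camera_win == 0) return;
    glk_set_window(camera_win);
    print "The lobby is quiet.^";
    glk_set_window(gg_mainwin);
];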

The main problems here are scope-related: whether any visual action (e.g.
look, examine) should be allowed through the camera just as if the player
were there. Some descriptions assume touch, but I suppose the onus is on
the programmer to remember to check ObjectIsUntouchable().
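
For instance, a touch-assuming description might guard itself like this (a
sketch; the second argument asks ObjectIsUntouchable to stay silent and
just report back):

Object velvet_rope "velvet rope"
  with description [;
           if (ObjectIsUntouchable(self, true))
               "Plush red velvet, though from here you cannot feel it.";
           "You run the velvet through your fingers.";
       ];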

Delayed-playback cameras present a whole new problem, however. If the
player is carrying the camera around, then it should be possible to simply
(!) copy everything which is printed to the screen into a huge (~1000
elements, I suppose) byte-array. This would then be printed back out when
the player watches the video. This should probably be possible both on the
camera and on any video/TV setup. The counter on the playback device can
conveniently be associated directly with the index into the byte array,
allowing 'fast forward to counter 350' and 'rewind to counter 0'. If we
want to allow the other kind of fast-forwarding, where you watch it as it
happens, perhaps this could be done by printing out only that text which
was in the emphasised style before, so we get all the room names and
anything else of note. This means, of course, that it will be necessary to
remember not only what was recorded but in what style (we may have wanted
this anyway). This would probably be easiest using something along the
lines of the OKBStyle.h library just published.
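
Under Glulx the copying itself could be fobbed off on a Glk memory stream
(a sketch assuming infglk.h; tape and the two routines are my own names;
note that while the stream is current the player sees nothing, so a real
game would echo the text as well, and styles would still have to be
remembered separately):

Constant TAPE_LEN 1000;
Array tape -> TAPE_LEN;      ! the video: a byte array
Array tape_result --> 2;     ! filled in by glk_stream_close
Global tape_stream;

[ StartRecording;
    tape_stream = glk_stream_open_memory(tape, TAPE_LEN, filemode_Write, 0);
    glk_stream_set_current(tape_stream);  ! output now lands on the tape
];

[ StopRecording;
    glk_stream_close(tape_stream, tape_result);
    glk_set_window(gg_mainwin);           ! restore normal output
    ! tape_result-->1 holds the number of bytes actually written
];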

That's all very well, but what if the player leaves the camera behind,
recording? Now we have an entirely new problem: how to work out what the
camera is seeing. A daemon on the camera object can watch its location and
print any change in description (allowing for such things as a camera
carried around by an NPC, or being lowered on a rope, or being thrown off a
cliff, or whatever). However, there is also the text printed by daemons,
timers and each_turn routines to consider. I think the easiest way to allow
for this is to introduce a 'recorder' attribute and a 'record' property.

Daemons, timers and each_turns check for the presence of all objects with
the recorder attribute instead of just checking for the player, as is
normally done. And they don't print out anything themselves, but instead
call the record property with the appropriate string as an argument. The
place in the library which calls all the each_turn routines would have to
be modified to run all those in scope of a recorder, and to provide the
identity of the recording object either as an argument or as a global
variable. This would mean that some each_turns would be called more than
once, if more than one recorder is in scope.
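
In outline (a sketch; LoopOverScope and PrintOrRun are real library
routines, the rest is my own naming):

Attribute recorder;
Property record;
Global current_recorder;

[ EachTurnFor x;
    if (x provides each_turn) PrintOrRun(x, each_turn);
];

[ RunRecorderEachTurns o;
    objectloop (o has recorder) {
        current_recorder = o;           ! visible to each_turn routines
        LoopOverScope(EachTurnFor, o);
    }
];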

For the player object, the record property would simply be a routine to
print out the string passed, but for the camera object it would write the
string to the video byte array. When the video runs out, or the stop button
is pressed, simply remove the recorder attribute and the camera is an
ordinary object again.
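
Concretely, an NPC daemon then reports through record instead of printing,
and the two implementations diverge (a sketch with invented names;
remember StartDaemon(thief) or the daemon never runs):

Object thief "man in a balaclava"
  with daemon [ o;
           objectloop (o has recorder)
               if (TestScope(self, o))
                   o.record("The man in the balaclava slinks past.");
       ],
  has  animate;

Object camcorder "camcorder"
  with record [ str;
           ! append str (and, ideally, its style) to the video array
       ],
  has  recorder;

The player's own record would just be [ str; print (string) str;
new_line; ], so ordinary play is unaffected, and give camcorder ~recorder
switches recording off again.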

But this is not the end of the story. Most NPCs have daemon or each_turn
routines which display their activities as they wander around and act, and
the camera would dutifully record these messages. However, the player is
much quieter, and the camera will notice none of his actions without huge
modification to all the library messages, as well as to every single
before, after, react_before and react_after routine. This seems like a bit
too much work. Are there any other ideas?

The last issue I can think of is that of descriptions such as:

"You are standing in..."

which is fine for your own vision, but looks totally wrong in the playback
from a camera which was left behind sitting on the floor. So that global
variable recording_object (possibly shortened to r_obj) comes into its own,
changing the above to:

print_ret (The) r_obj, " ", (isorare) r_obj, " ", (poseof) r_obj, " in..."

Where poseof looks something like:

[ poseof x;
    if (x provides pose) { x.pose(); rtrue; }
    print "standing";
];

And of course, the camera's pose routine prints "standing on its tripod" or
whatever is appropriate. The player's pose routine might check his current
status and print anything from "sitting" to "doing a handstand"; a bit of
randomness might even add some interest to room descriptions.
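
For instance (a sketch, reusing the invented pose property; note that
poseof prints no newline, so neither should pose):

Object tripod_camera "camera"
  with pose [;
           print "perched on its tripod";
       ],
  has  recorder;

The player's pose might instead switch on a posture variable, or even on
random(3), for variety.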

I haven't even touched on recording devices for anything other than vision,
like a dictaphone. Or what about a mega-recorder which you can play back in
a 'holodeck', or a virtual-reality machine where you can not only see and
hear but also smell, touch and taste? Then there are recorders for (as I
raised earlier) infra-red and ultra-violet, as well as other regions of the
electromagnetic spectrum, and these need a different definition of what a
light source is. Or recorders for telepathic emissions or hyperspace
modulation or anything else the author introduces to their world. Now we
need attributes for all the possible sensory inputs, rather than just one
bland 'recorder' attribute, and every single routine which prints anything
must check each and every one. Enough to crush the simulationist in all of
us? I hope not. Enough to require rewriting the library entirely?
Possibly.

I've raised a lot of points in this post, and I hope they all stimulate some
response.

Yours cogitatively,

John Bytheway

