Chris Flanagan wrote in message ...
There were two versions of infravision in my 2e DMG. One
allowed a character to see in the dark; the other allowed a
character to see heat in the dark. I think the second
version was used more often than the first, so I assume you're referring
to it.
Heat vision doesn't seem to make much sense for dwarves, who are
supposed to be able to see in the dark. Dwarves should be able to see
where a cavern drops off or branches off, and should also be able to
read in the dark; however, cave air and the surrounding stone aren't of
appreciably different temperature, so the average dwarf with heat vision
wouldn't see much in a cave. Reading is also a problem because the ink
on the page, if it's ordinary ink, would be the same temperature as the
page itself.
In 3e, we now have three types of vision. I'll see if I can remember
which races get which, as I go.
Normal vision: humans, halflings.
Low-light vision (superior to normal vision in torchlight & moonlight):
gnomes, elves, half-elves.
Darkvision (ability to see in total darkness, limited to black & white):
dwarves, half-orcs.
This gives dwarves the ability to see in total darkness while allowing
elves to see very well (and in color) in near darkness.
It also eliminates the need for dwarves to develop special marking
systems for their caves and special inks for their maps and books.
For more information, you might want to do a Deja search on Vision or
Darkvision; I think this came up a while back.
--
Sent via Deja.com http://www.deja.com/
>Why has Infra vision been replaced by dark vision ? I know that some people
>had trouble with infra vision but i thought it was great . why change it ?
>
Infravision took a little thinking; darkvision is easier for
those weaned on CRPGs to understand.
incrdbil wrote in message <39d34f11...@usenet.flinthills.com>...
Chris Flanagan wrote:
>
> Why has Infra vision been replaced by dark vision ? I know that some people
> had trouble with infra vision but i thought it was great . why change it ?
Perhaps it's not very useful.
In a cave, for instance, the walls would have a temperature close
to that of the air, so you wouldn't be able to see the walls at
all.
Also, I believe there have been many disputes and discussions
about what infravision can and can't detect.
Perhaps most roleplayers are unfamiliar with simple physics?
I don't know... I believe I could GM for a character who had
infrared vision, but I'd need to do a little bit of research
and thinking first.
--
Peter Knutsen
"soup cleric" <soup_...@my-deja.com> wrote in message
news:8qvjn1$265$1...@nnrp1.deja.com...
> In article <atHA5.4145$Bw1....@news.indigo.ie>,
> "Chris Flanagan" <kif...@indigo.ie> wrote:
> > Why has Infra vision been replaced by dark vision ? I know that some
> people
> > had trouble with infra vision but i thought it was great . why change
> it ?
> >
Computer Roleplaying Games.
Infravision never was a hard thing for my group to fathom. Heat
signatures and infrared radiation from any living creature with a body
temperature over 40 degrees or so.
Undead and non-living creatures radiate no heat, as their body
temperatures are ambient with the surrounding terrain, so they will be
spotted only by their movements with infravision IMC, as a darker shape
moving amidst a light gray background. Remember when Arnold covered
himself in mud to prevent the Predator from seeing him? He effectively
(and questionably, but still...) blocked his heat signature from the
creature, and the only thing the Predator could see was the movement of the
surrounding brush and foliage. This is pretty much how I run infravision
IMC. It may not be the canonical method, or logical, but what do I care,
especially if my methods work? ;)
--
Long live 2e.
I don't understand the sentiment that this issue (among others) has been
"dumbed down". After reading the PHB and DMG, that's certainly not the
conclusion I drew. 3e is clear, concise, and consistent. From here on I dub
it "3c". :-)
Back to infra vs. dark vision...I think sometimes infravision discussions would
devolve into physics debates, or pointless arguments about carrying a torch or
not. In short, I found these diversions disruptive and a little too "real
world" for my fantasy game. Darkvision is cut and dried, has the necessary
flavor, and allows my game to go on without giving the issue more thought than
it deserves.
================
Scott M. Alvarado
SMAlv...@aol.com
================
Note also that the amount of IR emission is NOT simply a function
of temperature, but also of material and surface properties.
That is, stone and wood at the same temperature would not
be emitting the same amounts of IR radiation and thus would be
easily distinguishable from each other. A mirror will not
emit IR radiation *at all* from its reflecting surface, so it
would appear "black"; however, it *will* reflect any IR that hits it
in the same way it reflects visible light.
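This point can be sketched with a grey-body model, where each material's radiant output is its emissivity times the blackbody value. The emissivity figures below are illustrative assumptions for the sake of the sketch, not measurements from the post.

```python
# Grey-body sketch: same temperature, different materials, different IR output.
# (Emissivity values are illustrative assumptions.)
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def exitance(T, emissivity):
    """Grey-body radiant exitance: epsilon * sigma * T^4, in W/m^2."""
    return emissivity * SIGMA * T**4

T = 300.0  # ambient cave temperature, K
print(round(exitance(T, 0.95)))  # ~436 W/m^2 -- rough stone (high emissivity)
print(round(exitance(T, 0.90)))  # ~413 W/m^2 -- wood
print(round(exitance(T, 0.05)))  # ~23 W/m^2  -- polished mirror, nearly "black" in emission
```

The mirror's low emissivity is what makes it look dark in emitted IR even though it reflects IR well.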
> Undead and non-living creatures radiate no heat, as their body
But they will *reflect* IR. And a person acts as their own
IR "torch"...
> temperatures are ambient with the surrounding terrain, so they will be
> spotted only by their movements with infravision IMC, as a darker shape
Nope. They will reflect IR emitted by those around them.
> moving amidst a light gray background. Remember when Arnold covered
> himself in mud to prevent the Predator from seeing him? He effectively
> (and questionably, but still...) blocked his heat signature from the
> creature, and the only thing the Predator could see were movement of the
This would work, so long as the mud were not a good heat conductor.
It is a different case than undead above.
> surrounding brush and foliage. This is pretty much how I run infravision
> IMC. It may not be the canonical method, or logical, but what do I care,
> especially if my methods work? ;)
But it *is* more complex than just saying "darksight" and "nightsight".
I used to run it this way, too. However, I didn't let elves have
infravision. I gave them the same biological optical enhancement
as cats and other nocturnal animals.
--
David R. Klassen voice: 856-256-4500 x3273
Department of Chemistry & Physics fax: 856-256-4478
Rowan University
201 Mullica Hill Road kla...@rowan.edu
Glassboro, NJ 08028 http://elvis.rowan.edu/~klassen
Also, air is NOT a dense solid and thus does NOT emit according
to the Stefan-Boltzmann law that governs how the walls are
emitting. That is, even in IR you don't "see" air.
It only raises the overall background of IR photons around.
> Also, I believe there have been many disputes and discussions
> about what infravision can and can't detect.
Many.
> Perhaps most roleplayers are unfamiliar with simple physics?
Many.
> I don't know... I believe I could GM for a character who had
> infrared vision, but I'd need to do a little bit of research
> and thinking first.
I know *I* did...
> I used to run it this way, too. However, I didn't let elves have
> infravision. I gave them the same biological optical enhancement
> as cats and other nocturnal animals.
Thanks for the fine forensic evaluation, and physics lesson, but I think
I'll stick to the system I have in place. :)
Semantics, as far as the actual wording is concerned. Whether I call it
infravision, darkvision, night sight, or anything else will still work
the way I envision it working. I'm not thrilled about multiple vision
types either, other than normal vision and infravision. X-ray vision is
possible, but only from magical items IMC.
--
Long live 2e.
Not always. IR reflectivity often differs markedly from
visible reflectivity. There are "cold mirrors" which
reflect light but not IR (and conversely "hot mirrors"
which reflect IR but not visible).
> > Undead and non-living creatures radiate no heat, as their body
> But they will *reflect* IR. And a person acts as their own
> IR "torch"...
>
> > temperatures are ambient with the surrounding terrain, so they will be
> > spotted only by their movements with infravision IMC, as a darker shape
> Nope. They will reflect IR emitted by those around them.
But the problem is really the contrast. The 300 K ambient
source (zombie & dungeon) peaks at around 9.65 um while the
310 K source (PC) peaks at 9.34 um and the Stefan-Boltzmann
law shows that the PC's surface only emits 14% more power
over the whole spectrum, and that has to go into a lot of
solid angle. It amounts to 64 W/m^2 difference but against
a background that's emitting at 460 W/m^2. The big unknown
here is the photometric response of infravision. I'm willing
to say that Lumens/Watt, which maxes at about 680 Lumens/Watt
for the peak of photopic response (550 nm; scotopic response
probably wouldn't be a very good analogy, since it occurs with
low scene illumination), will go down linearly as the wavelength
increases, just from a sort of quantum yield argument, so
taking the wavelength to be 9.5 um in some mean sense, the
response would only be about 36 Lumens/Watt.
Generously saying that that whole 14% lies in the IR air
window (which it obviously doesn't) and that the human is a
perfect blackbody (probably 5X too high), we can figure the
total "excess" light the PC is shedding. The (human) PC is
about like a cylinder 40 cm in diameter and 2 m tall,
conveniently emitting uniformly in the vertical plane, if we
neglect the "end effect" of the cylinder. His/her surface
area of 0.25 m^2 then emits 16 "excess" Watts, or 16*36=576
Lumens, about 1/3 of a 100 W light bulb (say 35 W).
So you have an area which is filled with 240 W light bulbs
(the background) and you are a lamp with the total effective
light output of a 275 Watt bulb, only 6 ft. tall. Can you see
a perfectly reflecting zombie ten ft. away? It is itself made
out of those same 240 W bulbs, so, squinting, you can only
pick it out by the contrast between its output + its
reflection of your output vs. the background's output + the
background's reflection of your output. At 10 ft. (~3 m) it
gathers about 1/25 of your light output, so if it reflected
all of that, it would look like (more or less) a bunch of 240
+ 35/25 = 241.5 W light bulbs standing against that
background of 240 W light bulbs, which are probably far
enough away that they don't return your own emission
significantly, but if they do, it just cuts the zombie's
contrast further.
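The blackbody figures in this analysis check out; a short script (a sketch, using the standard Wien displacement and Stefan-Boltzmann constants; the 300 K / 310 K temperatures are the thread's assumptions) reproduces them:

```python
# Numerical check of the blackbody figures above.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2898.0    # Wien displacement constant, um * K

def peak_um(T):
    """Wavelength of peak blackbody emission, in microns."""
    return WIEN_B / T

def flux(T):
    """Radiant exitance sigma * T^4, in W/m^2."""
    return SIGMA * T**4

T_bg, T_pc = 300.0, 310.0          # dungeon/zombie vs. PC skin
print(round(peak_um(T_bg), 2))     # 9.66 um
print(round(peak_um(T_pc), 2))     # 9.35 um
print(round(flux(T_bg)))           # ~459 W/m^2 background
print(round(flux(T_pc) - flux(T_bg)))              # ~64 W/m^2 excess
print(round(100 * (flux(T_pc) / flux(T_bg) - 1)))  # ~14% contrast
```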
Probably, the photometric response would be tuned down
just due to the overabundance of IR photons, but still
the contrast is going to be miniscule.
Nah, not complicated at all :-).
Ben B.
On Thu, 28 Sep 2000 14:12:24 +0100, in my rec.games.frp.dnd coffee mug, which
was quite moldy, Chris Flanagan, a dying weevil, wrote the following with his
antennae:
:) >Why has Infra vision been replaced by dark vision ? I know that some people
:) >had trouble with infra vision but i thought it was great . why change it ?
------------------
Cerberus AOD / A Paper Cut (ernieSCR...@DoddsTech.com)
ICQ UIN: 8878412 (take out SCREWTHESPAM to mail me, okay?)
"Children of tomorrow live in the tears that fall today"
-Children of the Grave, Black Sabbath
> > Nope. They will reflect IR emitted by those around them.
> But the problem is really the contrast. The 300 K ambient
And all that's really needed is to posit some function
that can subtract the nearly constant ambient background
from the signal. Since the eyes jitter anyway at high
frequency all we really need is some kind of median filtering
and subtracting "circuitry". Now how one does this
biologically... :)
> a background that's emitting at 460 W/m^2. The big unknown
> here is the photometric response of infravision. I'm willing
> to say that Lumens/Watt, which maxes at about 680 Lumens/Watt
Lumens...ugh. OK, so a Lumen is 1/680 W/steradian at 550 nm,
yes? And I assume, like the phon for hearing, that it is NOT
a constant value for all wavelengths but is based on the
spectral response of the cones, yes? That is, blue things
would have to be emitting/reflecting more energy than yellow
things in order to have the same lumen value?
Sorry, I'm just used to standard Watts, etc. Well, OK,
I'm used to magnitudes as well, and *that* system is really
perverse!
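One possible reading of the "goes down linearly" guess above can be sketched numerically. Note that the modern photopic peak is 683 lm/W at 555 nm (the thread's 680 lm/W at 550 nm is the same value to round-off); the fall-off into the IR is the thread's assumption, not established physics, and this particular scaling is only one way to land in the same ballpark as the ~36 lm/W used earlier.

```python
# Sketch of an assumed photometric response falling off with wavelength.
K_PEAK = 683.0     # lm/W at the photopic peak (modern CIE value)
LAM_PEAK = 0.555   # um, wavelength of the photopic peak

def lm_per_watt_linear(lam_um):
    """Assumed response: scales as (peak wavelength / wavelength)."""
    return K_PEAK * LAM_PEAK / lam_um

print(round(lm_per_watt_linear(9.5)))   # ~40 lm/W at 9.5 um, same order as the ~36 above
```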
> low scene illumination) will go down linearly as the wavelength
> increases, just from a sort of quantum yield argument, so
I'd agree, if the same cells used for visible detection
are those being used for IR detection. I've assumed that
infravision is due to a modified rod cell that ONLY detects
IR light, and thus we can posit whatever spectral response
function gives us the best result, assuming that those that
weren't optimal got Darwinized out.
> Generously saying that that whole 14% lies in the IR air
> window (which it obviously doesn't) and that the human is a
> perfect blackbody (probably 5X too high), we can figure the
> total "excess" light the PC is shedding. The (human) PC is
> about like a cylinder 40 cm in diameter and 2 m tall,
> conveniently emitting uniformly in the vertical plane, if we
> neglect the "end effect" of the cylinder. His/her surface
> area of 0.25 m^2 then emits 16 "excess" Watts, or 16*36=576
Well, I used an area of 2*1.3m*0.6m + 2*1.3m*0.10m (right rectangular
prism w/o ends model), which gives 1.82 m^2 of area and thus an
excess of (ratioing to your value) 116 W.
> Lumens, about 1/3 of a 100 W light bulb (say 35 W).
Well, a good fraction of the 100 W light bulb's output is also in
the IR, so comparing a value of Watts emitted to what we
perceive as light from a light bulb is not exactly fair.
If we use the comparison between compact fluorescent
bulbs and incandescent bulbs (20 W C.F. = 100 W Inc.),
we can *roughly* estimate that 80 W of that energy is
really IR.
> So you have an area which is filled with 240 W light bulbs
> (the background) and you are a lamp with the total effective
But we can posit background subtraction...
> > low scene illumination) will go down linearly as the wavelength
> > increases, just from a sort of quantum yield argument, so
> I'd agree, if the same cells used for visible detection
> are those being used for IR detection. I've assumed that
> infravision is due to a modified rod cell that ONLY detects
> IR light, and thus we can posit whatever spectral response
> function gives us the best result, assuming that those that
> weren't optimal got Darwinized out.
The wide range of common scenes makes it hard to find a
general optimal response though.
> > Generously saying that that whole 14% lies in the IR air
> > window (which it obviously doesn't) and that the human is a
> > perfect blackbody (probably 5X too high), we can figure the
> > total "excess" light the PC is shedding. The (human) PC is
> > about like a cylinder 40 cm in diameter and 2 m tall,
> > conveniently emitting uniformly in the vertical plane, if we
> > neglect the "end effect" of the cylinder. His/her surface
> > area of 0.25 m^2 then emits 16 "excess" Watts, or 16*36=576
> Well, I used an area of 2*1.3m*0.6m + 2*1.3m*0.10m (right rectangular
> prism w/o ends model), which gives 1.82 m^2 of area and thus an
> excess of (ratioing to your value) 116 W.
Yeah, I somehow shifted the decimal. It should have been 2.5
m^2, but still the maximum contrast is going to be less than
half of 14%, and even that is diluted by spreading out in space.
A realistic contrast from an ambient temperature object is only
going to be a few percent, and that's only enough contrast to
signal the possible presence of a vague blob.
> > Lumens, about 1/3 of a 100 W light bulb (say 35 W).
> Well, a good fraction of the 100W light buld is also in
> the IR, so comparing a value of Watts emitted to what we
> percieve as light from a light bulb is not exactly fair.
> If we use the comparisons between compact flourescent
> bulbs to incandescent bulbs: 20W C.F. = 100W Inc. so
> we can *roughly* estimate that 80W of that energy is
> really IR.
When you give the photometric properties of a source,
that indicates the relation to perceived brightness, so
the lumen value of a light bulb already takes its spectrum
into account. That's why developing a light bulb equivalent
is useful in looking at this - it tells you how to model
infravision perception with familiar visible analogs.
> > So you have an area which is filled with 240 W light bulbs
> > (the background) and you are a lamp with the total effective
> But we can posit background subtraction...
I don't know of any biological systems that do front-end
background subtraction, probably because it isn't usually
very flexible. Even with electronics, it's difficult to do
it effectively for a wide variety of scenes.
Ben B.
>Perhaps it's not very useful.
In a cave, for instance, the walls would have a temperature close to
that of the air, so you wouldn't be able to see the walls at
all.
Also, I believe there have been many disputes and discussions
about what infravision can and can't detect.
Perhaps most roleplayers are unfamiliar with simple physics?
I don't know... I believe I could GM for a character who had
infrared vision, but I'd need to do a little bit of research
and thinking first<
Why do people always assume that everyone who played 2E used the
OPTIONAL rule that used infravision as heat detecting vision? The
OFFICIAL rule said that it just works, you simply see in the dark,
period. No physics explanation, no debate.
Another change that didn't need to be made. They could have just left out
the optional physics-oriented rule and let it be.
--
Halaster Blackcloak
"Undermountain, the Realms' deadliest dungeon? I prefer to call it
home."
"Elminster? Bah! Neophyte!"
> When you give the photometric properties of a source,
> that indicates the relation to perceived brightness, so
> the lumen value of a light bulb already takes its spectrum
But, and correct me if I'm wrong - astro just does NOT
worry about lumens - the use of the lumen for IR vision
is presupposing a spectral response of standard cones
and not a specifically modified rod cell.
This is why I've always simply restricted my discussion
to the flux in W/m^2 and binned things as either IR or
VIS in a broadband sense.
I don't think I saved a copy of your original analysis
message, but if you've got it handy, or something even
more complete, I'd love to see it (again). Perhaps
with more simplified discussion of the definition of
the Lumen and assumptions on the IR vs. VIS detection.
> I don't know of any biological systems that do front-end
> background subtraction, probably because it isn't usually
> very flexible. Even with electronics, it's difficult to do
> it effectively for a wide variety of scenes.
It's done in astro IR all the time. You simply chop the
secondary mirror so that in one scene you get object+background
and in the other (a few arcseconds away) simply background.
The difference in levels of the AC signal is then the source
brightness. In camera modes we have to nod the 'scope
and stare at blank sky and do the subtraction in post-processing.
I could imagine a system where the eyes jitter at high frequency
(which ours do already) and then some kind of bio-circuits
that do median filtering and subtracting of all input.
Sort of techno-magic systems no doubt...
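The chop-and-nod scheme described here can be sketched as a toy calculation: median the "object + background" frames and the "background only" frames, then subtract. All the numbers below are invented for illustration.

```python
import statistics

def chop_subtract(on_frames, off_frames):
    """Estimated source brightness = median(on) - median(off)."""
    return statistics.median(on_frames) - statistics.median(off_frames)

on = [462.1, 461.8, 462.4]    # chopped onto the source (counts, arbitrary units)
off = [460.0, 460.3, 459.9]   # chopped a few arcseconds away, onto blank sky

print(round(chop_subtract(on, off), 1))   # ~2.1, the source's brightness above background
```

The median (rather than the mean) is what gives the scheme some robustness against a single bad frame.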
"David R. Klassen" wrote:
>
> Ben Buckner wrote:
> >
> > When you give the photometric properties of a source,
> > that indicates the relation to perceived brightness, so
> > the lumen value of a light bulb already takes its spectrum
> But, and correct me if I'm wrong - astro just does NOT
> worry about lumens - the use of the lumen for IR vision
> is presupposing a spectral response of standard cones
> and not a specifically modified rod cell.
True, photometric units are mostly useless in astronomy,
but they're indispensable for discussing perception.
The lumen doesn't presuppose anything about the type of
receptor involved - it presupposes that infravision
images are perceived by the brain the same way normal
images are. Essentially you have 4 visual modes as opposed
to the 3 in humans, photopic (bright), Purkinje
(intermediate), scotopic (low-light), and infravisual
(extremely low light/IR). They all have different scaling
factors between radiometric quantities, but a lumen still
looks like a lumen, because it's defined by how it's
perceived.
If I induce an image by electrically stimulating the
retina, and that image looks the same to you as
a real light-induced image in terms of brightness,
that fixes the photometric properties of the real
scene, regardless of the physical spectral content
of that scene. All real scenes that appear as bright
as this scene have the same photometric properties.
A low-light scene with scotopic (rod) response can
appear as bright as a bright-light scene with
photopic (cone) response, in which case they have the
same photometric properties.
> This is why I've always simply restricted my discussion
> to the flux in W/m^2 and binned things as either IR or
> VIS in a broadband sense.
>
> I don't think I saved a copy of your original analysis
> message, but if you've got it handy, or something even
> more complete, I'd love to see it (again). Perhaps
> with more simplified discussion of the definition of
> the Lumen and assumptions on the IR vs. VIS detection.
I think I can model it better now. I was trying to treat
the PC as a discrete source in a 2D geometrical model, but
I think it's easier to look at it in terms of flux density,
since the target (zombie) and the source (PC) are of
roughly equal size and conformation.
The PC emits a radiant/luminous flux density bounded by
14% over the background, so if you see another live human,
he looks at most 14% brighter than the background. This
in itself, without much more analysis, should show that
a body merely reflecting that 14% can contrast no more
than 14%, and will contrast less unless the reflecting
target is directly adjacent to the viewer.
We assume that the background walls are farther away
than the target, have blackbody surface, and contact
an infinite heat reservoir with high conductivity, such
that the excess irradiance from the PC has a negligible
contribution to the background and the background
itself can be modeled as the inside of a spherical
blackbody with ambient temperature. I think it can
be shown that the background brightness seen by an
observer at the center of such a sphere will be
independent of the sphere's radius, so that we can
resize the sphere arbitrarily.
With this model of the background, we set the background
sphere's radius equal to the distance of the target. The
contrast is then quite easy to model: if the target is
sufficiently far from the viewer, it is in effect a
reflective patch on a blackbody surface (i.e., the
sphere is nearly flat over the extent of the target).
The reflective target patch receives a fraction
of the source's emitted excess radiation approximately
equal to the fraction of solid angle which it
subtends. Since the source and target are roughly
equal in size and form, the target then effectively
reflects the radiant/luminous flux which the source
emits, scaled by the solid-angle ratio, which is
about equal to the cross-sectional area of
the target divided by 4 Pi R^2, where R is the target
distance. Your body model would give a 0.78 m^2 cross
section for the PC and zombie facing one another, so
the 14% contrast will be scaled by 0.062 R^-2, where
R is in meters. Of course, for the target very
near the viewer, this relation flattens out to
an asymptote at 14%. My gut feeling from years of
scatterometry experiments is that you end up seeing
at most a few percent of the brightness of a source
reflected in most circumstances, so that jibes with
my intuition about
it. We can also go to the 2D, end- and floor-neglecting
cylindrical model, which makes the scaling about like
target width divided by 2 Pi R, or 0.095 R^-1, but
you're still looking at no more than a few percent
contrast.
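The reflected-contrast scaling just derived can be checked with a short script. The inputs (14% intrinsic contrast, 0.78 m^2 facing cross-section, a perfectly reflecting target) are the thread's assumed values.

```python
import math

INTRINSIC = 0.14   # max brightness excess of a warm body over background
AREA = 0.78        # m^2, cross-section of PC and zombie facing one another

def reflected_contrast(r_meters):
    """3-D model: the target reflects the PC's excess flux scaled by the
    fraction of solid angle it subtends, flattening to 14% up close."""
    scale = AREA / (4 * math.pi * r_meters**2)
    return INTRINSIC * min(1.0, scale)

print(round(AREA / (4 * math.pi), 3))            # 0.062, the R^-2 coefficient
print(round(100 * reflected_contrast(3.0), 2))   # ~0.1% contrast at ten feet
```

At ten feet the reflected contrast is a tenth of a percent, which is the quantitative version of "a vague blob at best."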
The conversion to photometric units requires an
assumption about IR response, and since the quantum-
yield argument leads to very bright scenes, the
response would probably be less than what that
argument yields. It's probably better to just stick
with contrast.
We might also note that the viewer's own IR emissions
in real life would blind the imaging system (eyeball)
- electronic thermal IR detectors operate at ambient
temperature or are actively cooled. You'd have to
have external retinas with highly IR reflective
backings (or refrigerated dewar eyeballs) to have
any hope of making this work. The interior of an
eyeball will dump hundreds of times as much 37 C
IR on the retina as you'll ever be able to squeeze
through the pupil's optical etendue.
> > I don't know of any biological systems that do front-end
> > background subtraction, probably because it isn't usually
> > very flexible. Even with electronics, it's difficult to do
> > it effectively for a wide variety of scenes.
> It's done in astro IR all the time. You simply chop the
Yeah, but astronomy isn't a "wide variety of scenes." :-)
It's easy to process out background when you have extensive
prior knowledge about the scene.
> secondary mirror so in one scene you get object+background
> and in the other (a few arcseconds away) is simply background.
> The difference in levels of the AC signal is then the source
> brightness. In camera modes we have to nod the 'scope
> and stare at blank sky and do the subtraction in post-processing.
> I could imagine a system where the eyes jitter at high frequency
> (which ours do already) and then some kind of bio-circuits
> that do median filtering and subtracting of all input.
The chief problem with background subtraction is that it
does nothing about noise, and can even increase noise if
you're not very careful about how you do it. Noise is
usually proportional to total signal strength, so when you
have a small signal on a large background, you can take away
the mean background but you still have the big noise signal
left over, and that's what really confounds detection.
Background subtraction is far more useful for improving
the visual details of a known foreground image than it is
for detecting an unknown target. With this "blinking"
method, there's nothing to prevent you from inadvertently
using the zombie as the background, or using some random
fluctuation as the background, or using a shiny stalactite
as the background, etc. Yes, you could integrate, but then
you have a vision system with slow temporal response, which
is very bad for the things that infravision needs to do.
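The noise point can be made concrete with one common noise model (Poisson/shot noise, where the noise variance scales with the total counts collected; the specific counts below are invented for illustration): subtracting the mean background leaves its fluctuations behind.

```python
import math

# Shot-noise sketch: a small signal buried in a large background.
background = 460.0   # mean background counts (per pixel, per exposure)
signal = 2.0         # small excess from the target (the zombie)

noise = math.sqrt(background + signal)  # Poisson noise on the *total* counts
snr = signal / noise                    # what's left after mean subtraction

print(round(noise, 1))  # ~21.5 counts of fluctuation
print(round(snr, 2))    # ~0.09 -- far below any plausible detection threshold
```

Even with the mean background perfectly subtracted, the ~21-count fluctuations swamp the 2-count target.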
Ben B.