
Terra Nova Demo


Jerome Scheuring

Apr 25, 1996

After running through the demo of Terra Nova
(http://www.lglass.com), I can't comment on the
quality of the squad support AI, because, for the
demo, there's only you in the squad.

As for the Pirates, they seem limited in certain
ways: in particular, they will remain in place for
quite some time after their last contact with an
enemy unit-- *any* enemy unit, including one of
your sensor drones-- rather than returning to
regular patrol routes, or standing guard positions.

This makes it possible to send one's drone out to
attract the attention of the Pirates away from
their bases, which may then be destroyed in peace.

Any comments on the Pirates' AI from those of you
who've purchased the game?

Actually, this raises the question of how game AI
handles feints in general.

Is it feasible for a unit (or group of units) to
decide (without cheating) that a particular
maneuver is a feint, and either ignore it, or go
the other way?

In TN, it seems that cooperative maneuvering on the
part of the squad would carry the day in any
you-attacking-them scenario, especially in cases
where at least one of you can move faster than
them; in the case of the demo, your drone can move
faster than they can, and can fly.

I can't speak to them-attacking-you scenarios,
since, presumably, their targets are fairly
well-defined and nonmoving.

Richard Wesson

Apr 27, 1996

In article <317FED...@sylvia.com>,
Jerome Scheuring <jsc...@sylvia.com> wrote:
-After running through the demo of Terra Nova
-(http://www.lglass.com), I can't comment on the
-quality of the squad support AI, because, for the
-demo, there's only you in the squad.
[...]
-
-Actually, this begs the question of how game AI
-handles feints, in general.
-
-Is it feasible for a unit (or group of units) to
-decide (without cheating) that a particular
-maneuver is a feint, and either ignore it, or go
-the other way?
-
-In TN, it seems that cooperative maneuvering on the
-part of the squad would hold the day on any
-you-attacking-them scenario, especially in cases
-where at least one of you can move faster than
-them; in the case of the demo, your drone can move
-faster than they can, and can fly.
-
-I can't speak to them-attacking-you scenarios,
-since, presumably, their targets are fairly
-well-defined and nonmoving.


I've often thought that many simpler wargames (C&C for example)
would be much improved if the AI just had a concept of
"commensurate force". That is, when attacking, aim for a 2:1
force advantage in that area. C&C didn't seem to do this; no
matter how good (or how bad) your defenses were, it would always
send the same size group of units. Or maybe management took it
out :-(

If the pirates responded to a single unit with just enough force
to take it out reliably, then you couldn't conduct a feint this
way as easily.
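
Just to make the idea concrete, here's a rough sketch in Python; the
unit structures, numbers, and names are invented for illustration, not
taken from any actual game:

# Sketch of "commensurate force": size the attack group to the defenses
# actually observed in the target area, rather than a fixed raid size.
# Everything here (fields, values) is hypothetical.

def pick_attack_force(available_units, observed_defenders, ratio=2.0):
    """Greedily gather units until we have `ratio` times the defenders' strength."""
    needed = ratio * sum(u["strength"] for u in observed_defenders)
    force, total = [], 0.0
    # Commit the strongest units first so the group stays small.
    for unit in sorted(available_units, key=lambda u: u["strength"], reverse=True):
        if total >= needed:
            break
        force.append(unit)
        total += unit["strength"]
    return force if total >= needed else []   # empty list = don't attack yet

# Example: respond to a lone scout with just enough force, not the whole base.
defenders = [{"strength": 1.0}]
garrison = [{"strength": 3.0}, {"strength": 2.0}, {"strength": 2.0}]
print(pick_attack_force(garrison, defenders))    # -> [{'strength': 3.0}]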

-- Rich Wesson
(wes...@cse.ogi.edu)

Steven Woodcock

Apr 30, 1996

Richard Wesson (wes...@church.cse.ogi.edu) opined thusly:
: I've often thought that many simpler wargames (C&C for example)
: would be much improved if the AI just had a concept of
: "commensurate force". That is, when attacking, aim for a 2:1
: force advantage in that area. C&C didn't seem to do this; no
: matter how good (or how bad) your defenses were, it would always
: send the same size group of units. Or maybe management took it
: out :-(

Allegedly the "prequel" C&C game will do exactly this: tailor its
response to the forces it sees and the size of the threat. There was
a brief article about it in the recent Strategy Plus, or you can
read about it over on my Game AI page (address below).


Steven

+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Information Real3D <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: wood...@escmail.orl.mmc.com (Work), swoo...@cris.com (Home) |
| Web: http://www.cris.com/~swoodcoc/wyrdhaven.html (Top level page) |
| http://www.cris.com/~swoodcoc/ai.html (Game AI page) |
| http://www.cris.com/~swoodcoc/software.html (AI Software page) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| the Lockheed Martin Information Real3D |
+=============================================================================+

Eric Dybsand

Apr 30, 1996

In <4m3vql$8...@tribune.concentric.net> Swoo...@cris.com (Steven
Woodcock) writes:
>
>Richard Wesson (wes...@church.cse.ogi.edu) opined thusly:
>: I've often thought that many simpler wargames (C&C for example)
>: would be much improved if the AI just had a concept of
>: "commensurate force". That is, when attacking, aim for a 2:1
>: force advantage in that area. C&C didn't seem to do this; no
>: matter how good (or how bad) your defenses were, it would always
>: send the same size group of units. Or maybe management took it
>: out :-(
>
> Allegedly the "prequel" C&C game will do exactly this, tailor its
>response to the forces it sees and the size of the threat. There was
>a brief article about it in the recent Strategy Plus, or you can
>read about it over on my Game AI page (address below).
>

This is one of the problems I'm facing in Enemy Nations: without
cheating, how does an AI player determine the proper force
mixture for an assault, invasion or whatever type of attack on the
human player or another AI player?

In C&C, it appears that once the terrain is revealed, then there
is no "fog of war", and so the AI player seems to have reason to
know the force composition of the human player (Joe B. please
correct me if I'm wrong about this). Such a situation would make
it easier to calculate an acceptable mixture of unit types and
quantities to make for a good attack force.

However, if "fog of war" is maintained, then only those human player
units sighted by, and known by, the AI player, can be used to help
determine the force composition of an attack force.

My question is, outside of the obvious use of random selections, what
basis have others used to arrive at how many units, and of what type,
to gather for an attack? (Game specifics notwithstanding.)

Regards,

Eric Dybsand
Glacier Edge Technology
Glendale, Colorado, USA


Steven Woodcock

Apr 30, 1996

Eric Dybsand (ed...@ix.netcom.com) opined thusly:

: In C&C, it appears that once the terrain is revealled, then there
: is no "fog of war", and so the AI player seems to have reason to
: know the force composition of the human player (Joe B. please
: correct me if I'm wrong about this). Such a situation would make
: it easier to calculate an acceptable mixture of unit types and
: quantities to make for a good attack force.


I'm waiting to hear from Joe on this too, but that does appear to
be the way C&C works.

: However, if "fog of war" is maintained, then only those human player
: units sighted by, and known by, the AI player, can be used to help
: determine the force composition of an attack force.

: My question is, outside of the obvious use of random selections, what
: basis have others used to arrive at how many units and of what type are
: to use to gather for an attack? (Game specifics withstanding)


Some ideas of varying quality:

1.) You could use a database of pre-built attack forces of
varying size. When the AI determines it wants to send
out an attack force, it randomly picks one of these
configurations (perhaps based on whatever knowledge it
has of the defenses), gathers and/or builds the units,
and sends them out. The problem with this approach is
you're not dynamically designing the forces, just
selecting menu options, and you have to have some
AI to handle cases where you just *can't* build a given
unit type for the selected force.

2.) You could simply sum up the known defensive/offensive
capabilities of the area/units you plan to attack and
gather enough attack strength to meet some criteria
(2:1, 3:1, whatever). The problem with *that* is that
not all attack points are created equally; a flamethrower
in C&C, for example, is worth more than 3 machine gunners.

3.) You could analyze the force you're going up against (again,
based on your best info), find the best "anti" units to use
against each one, and send two of each. This is cheap but
probably pretty effective.


That's it off the top of my head.
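
For what it's worth, a toy Python sketch of idea (2); the point values
and unit names are made up, and as noted, raw points don't really do a
flamethrower justice:

# Toy sketch of option (2): sum the known strength of the target area and
# queue up buildable units until some ratio is met.  Point values invented.

KNOWN_DEFENSE = {"machine_gunner": 3, "flamethrower": 1}     # what we've scouted
POINT_VALUE   = {"machine_gunner": 1.0, "flamethrower": 4.0, "tank": 6.0}

def required_strength(defense, ratio=2.0):
    return ratio * sum(POINT_VALUE[kind] * n for kind, n in defense.items())

def build_order(defense, buildable, ratio=2.0):
    """Round-robin over what we *can* build until the ratio is met."""
    target, queued, total = required_strength(defense, ratio), [], 0.0
    kinds = sorted(buildable, key=lambda k: POINT_VALUE[k])
    i = 0
    while total < target:
        kind = kinds[i % len(kinds)]
        queued.append(kind)
        total += POINT_VALUE[kind]
        i += 1
    return queued

print(build_order(KNOWN_DEFENSE, ["machine_gunner", "tank"]))
# -> ['machine_gunner', 'tank', 'machine_gunner', 'tank']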

Joe Bostic

May 1, 1996

Swoo...@cris.com (Steven Woodcock) wrote:
> Some ideas of varying quality:
>
> 1.) You could use a database of pre-built attack forces of
> varying size. When the AI determines it wants to send
> out an attack force, it randomly picks one of these
> configurations (perhaps based on whatever knowledge it
> has of the defenses), gathers and/or builds the units,
> and sends them out. The problem with this approach is
> you're not dynamically designing the forces, just
> selecting menu options, and you have to have some
> AI to handle cases where you just *can't* build a given
> unit type for the selected force.
>
> 2.) You could simply sum up the known defensive/offensive
> capabilities of the area/units you plan to attack and
> gather enough attack strength to meet some criteria
> (2:1, 3:1, whatever). The problem with *that* is that
> not all attack points are created equally; a flamethrower
> in C&C, for example, is worth more than 3 machine gunners.
>
> 3.) You could analyze the force you're going up against (again,
> based on your best info), find the best "anti" units to use
> against each one, and send two of each. This is cheap but
> probably pretty effective.
> That's it off the top of my head.
>Steven

These are all good ideas. Red Alert uses all of these strategies (to
one degree or another). The first strategy gives the scenario
designers more control (they pick the particular groups rather than
letting the computer do so). The second and third strategies are more
suitable for end-game and multi player (with AI players) battles.
However, there are other factors to consider.

Distance to enemy force:
The greater the distance the more options the computer has. That
is, it can (with more safety) delay building combatants in order to
build infrastructure.

Mobility of enemy force:
The computer must also know how the player is able to travel. With
the threat of airborne assault, the defense posture and force
composition is greatly affected. The computer must make a 'best guess'
with regard to this threat by examining the player's assets and past
behavior.

Pattern of enemy attacks:
The computer remembers the patterns of attack the player uses.
Human players have a fortunate (for the computer) tendency to use the
same strategy over and over. A defense tailored to meet a particular
attack strategy can be very effective.

Expert System 'AI':
The computer has the advantage of leveraging all the most
successful skills and strategies discovered during testing. These give
the computer a 'bag of tricks' to use when the situation warrants.

Memory of past success and failure:
The computer must also remember the success and failure rates of
the various strategies at its disposal. Some players are better suited
to defend against particular strategies than others. It is also
presumed that all players will become better at the game, and so the
computer must be able to compensate for successful defense strategies.
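
(For what it's worth, a bare-bones Python sketch of that kind of memory;
the strategy names and the smoothing are only an illustration, not the
actual Red Alert implementation:)

# Keep a per-strategy win/loss record and bias future selection toward
# what has worked against this player.  Laplace smoothing (the +1/+2)
# keeps rarely used strategies from being frozen out entirely.

import random

class StrategyMemory:
    def __init__(self, strategies):
        self.record = {name: {"wins": 0, "losses": 0} for name in strategies}

    def report(self, name, won):
        self.record[name]["wins" if won else "losses"] += 1

    def choose(self):
        names = list(self.record)
        weights = [(r["wins"] + 1) / (r["wins"] + r["losses"] + 2)
                   for r in self.record.values()]
        return random.choices(names, weights=weights, k=1)[0]

mem = StrategyMemory(["rush", "turtle", "air_drop"])
mem.report("rush", won=False)      # the player shut the rush down
mem.report("air_drop", won=True)   # the airborne assault worked
print(mem.choose())                # air_drop now favoured, rush de-emphasised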

Megahertz:
Finally, the computer is just plain faster at analyzing numbers
than a human is. This is the basic theory behind chess programs --
they aren't necessarily smarter than the grand masters, they just look
more moves ahead. The computer can assess threats with much better
speed and accuracy than the human player. Combine this with its
lightning fast response time (and the above strategies), and the
computer becomes a formidable opponent.

Joe B.


Eric Dybsand

May 1, 1996

In <3186ca9c....@news.accessnv.com> joe...@anv.net (Joe Bostic)
writes:
>
>Swoo...@cris.com (Steven Woodcock) wrote:
>> Some ideas of varying quality:
>>

[Steve's interesting ideas snipped and saved off to disk]

>
>These are all good ideas. Red Alert uses all of these strategies (to
>one degree or another). The first strategy gives the scenario
>designers more control (they pick the particular groups rather than
>letting the computer do so). The second and third strategies are more
>suitable for end-game and multi player (with AI players) battles.
>However, there are other factors to consider.
>

[some of Joe's factors snipped and saved too]

>
>Pattern of enemy attacks:
> The computer remembers the patterns of attack the player uses.
>Human players have a fortunate (for the computer) tendency to use the
>same strategy over and over. A defense tailored to meet a particular
>attack strategy can be very effective.
>

This is another favorite area of interest of mine, namely: what
data values actually provide a representation of the "pattern
of attack the player uses"?

Over the years, I've tried several combinations which, IMO, worked
to some degree, to a mediocre level at best, in accurately portraying
the actual "pattern" the player demonstrates.

In my attempts at attack pattern tracking, I've saved items such as:

Timing of attack - early in the game vs. later in the game
Direction of attack - compass directions only
Style of attack - single front vs. multiple fronts
Quantity of units - how many were used
Quality of units - what experience level was for units
Unit mix - what kind of units used by opfor and AI player
Attack Rating - number of AI units destroyed + objectives captured

My point is that the "patterns" I've generated and observed were
incredibly subtle, if they existed at all. And those attacks with
the same high rating were often completely different.
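
(For concreteness, a record along these lines might look like the Python
sketch below; the field names are illustrative, not my actual
implementation:)

# One attack observation, so later attacks can be compared against history.

from dataclasses import dataclass, field

@dataclass
class AttackRecord:
    turn: int                 # timing: early vs. late game
    direction: str            # compass direction of approach
    fronts: int               # single vs. multiple fronts
    unit_count: int           # quantity of units committed
    avg_experience: float     # quality of those units
    unit_mix: dict = field(default_factory=dict)   # type -> count, both sides
    rating: float = 0.0       # AI units destroyed + objectives captured

history = []
history.append(AttackRecord(turn=12, direction="NE", fronts=1,
                            unit_count=8, avg_experience=1.5,
                            unit_mix={"tank": 5, "infantry": 3}, rating=4.0))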

So, while I agree that such a goal (the computer remembers the
patterns) is desirable, I'm curious as to what you and others think
are the data elements that are critical for demonstrating the "actual
pattern of attack" as employed by a human player.


>
>Memory of past success and failure:
> The computer must also remember the success and failure rates of
>the various strategies at its disposal. Some players are better suited
>to defend against particular strategies than others. It is also
>presumed that all players will become better at the game, and so the
>computer must be able to compensate for successful defense strategies.
>

Once again, I agree with the goal, but I'm curious if the ratings
applied to the various strategies are able to actually evaluate
whether the given strategy was successful or a failure.

If one simply rates a strategy based on whether it "won" the game
or scenario, then that rating would be the same (in my mind) as
that applied to another, completely different strategy that also
was used to "win" the game. There needs to be some way to provide
a differentiation of the ratings, and that (in my mind) would need
an assortment of data history by which a given strategy could be
rated. It is the applicability of the selected data that leaves
me wondering: what makes for the most accurate rating of a strategy?

Jan Schrage

May 3, 1996

On 1 May 1996 13:29:53 GMT, Eric Dybsand <ed...@ix.netcom.com> wrote:
[snip]

>
>In my attempts at attack pattern tracking, I've saved items such as:
>
>Timing of attack - early in the game vs. later in the game
>Direction of attack - compass directions only
>Style of attack - single front vs. multiple fronts
>Quantity of units - how many were used
>Quality of units - what experience level was for units
>Unit mix - what kind of units used by opfor and AI player
>Attack Rating - number of AI units destroyed + objectives captured
>
>My point is that the "patterns" I've generated and observed, were
>incredibly subtle, if they existed at all. And those attacks with
>the same high rating, were often completely different.
>
>So, while I agree that such a goal (the computer remembers the
>patterns) is desirable, I'm curious as to what you and others think
>are the data elements that are critical for demonstrating the "actual
>pattern of attack" as employed by a human player.
>
I think that depends very much on the game. One of my preferred attack
patterns is (in games that allow this sort of thing) to keep the enemy at
bay with long range weapons, load effective short range weapons onto fast
trucks, move in and start the killoff (very effective against long range
weapons in my exp). Now this is extremely easy to spot (for humans), but
none of your criteria matches it and I frankly can't think of a method
for an AI to spot it, that is, simple criteria that do not require this
pattern to be known beforehand. Your criteria (as I understand them) would
not spot it, since it is always a small group of units on a larger front
and will probably disappear in the 'noise'.
I've heard that Dungeon Keeper employs some sort of 'behavioural
matching' which simulates player strategies in that way; I can't
remember the reference. There seem to be ways, though.

[snip]

>
>Once again, I agree with the goal, but I'm curious if the ratings
>applied to the various strategies, are able to actually evaluate
>whether the given strategy was succussful or a failure.
>
Simple suggestion: the ratio of units lost to enemy units destroyed, maybe
weighted by the power of the units. This is fairly easy to do.
I am not sure whether this is always appropriate (or appropriate at all).
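
A sketch of that rating in Python (the weights and numbers are invented):

# Exchange ratio weighted by unit power.  Whether this is the *right*
# measure is, as said, another question.

def exchange_rating(own_losses, enemy_losses):
    """Both arguments are lists of power values of destroyed units."""
    lost = sum(own_losses) or 1e-9        # avoid division by zero
    killed = sum(enemy_losses)
    return killed / lost                  # > 1.0 means the strategy came out ahead

print(exchange_rating(own_losses=[2.0, 1.0], enemy_losses=[4.0, 3.0, 1.0]))  # ~2.67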

Regards,

-jan
..... .. .. . .. .... ...... ...... .. . . ... .. ..... ... . ... . .... ..
Jan Schrage j.sc...@astro.cf.ac.uk, sch...@cardiff.ac.uk
http://www.unix-ag.uni-kl.de/~schrage/
PGP Public Key ID: finger ug...@carina.astro.cf.ac.uk

ss...@intranet.ca

May 5, 1996

To: Joe...@anv.net
Subject: Re: Attacking in Strength

Jo> Distance to enemy force:
Jo> The greater the distance the more options the computer has. That

I'd use a gravity-type formula ... concern is directly
proportional to enemy strength / (player strength * distance
squared). Why inverse distance squared? In theory, the
player is concerned just as much, so you get to multiply the
distances... or something.

Jo> The computer must also know how the player is able to travel. With

Perhaps consider the attack range too... if you 'remember'
position of unit x at t = 0, t = a and t = 2a (a is a number you
make up) and average those vectors you can determine vaguely how
long the unit will take to get to you. However, like Len
Maxwell's 'destination predictor' (from another thread in
comp.ai.games) this has major accuracy problems especially with
complex terrain.

So... you could measure the distance between the enemy and the
player's closest unit at t and at t - a and average that in with
the current velocity vector estimate .. if the distance is
closing, try to extrapolate how long it will take to
get to you (distance / speed).. or even better ((distance - attack
range) / speed).. it doesn't have to be on top of you to attack
you.

Concern should increase geometrically/exponentially as t
lessens (fear? well, it just makes sense ... as it gets closer,
there's a bigger concern it'll kill you).
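
Rough Python for the above; everything (names, numbers) is invented for
illustration:

import math

# concern ~ enemy_strength / (player_strength * distance^2)
def concern(enemy_strength, player_strength, distance):
    return enemy_strength / (player_strength * distance ** 2)

# crude time-to-contact from two remembered position samples
def time_to_contact(pos_then, pos_now, dt, my_pos, attack_range=0.0):
    vx, vy = (pos_now[0] - pos_then[0]) / dt, (pos_now[1] - pos_then[1]) / dt
    speed = math.hypot(vx, vy)
    dist_now = math.hypot(my_pos[0] - pos_now[0], my_pos[1] - pos_now[1])
    dist_then = math.hypot(my_pos[0] - pos_then[0], my_pos[1] - pos_then[1])
    if speed == 0 or dist_now >= dist_then:
        return float("inf")               # not moving, or not closing on us
    return max(dist_now - attack_range, 0.0) / speed

print(concern(enemy_strength=10, player_strength=5, distance=4))                  # 0.125
print(time_to_contact((0, 0), (1, 0), dt=1.0, my_pos=(10, 0), attack_range=2.0))  # 7.0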

Jo> Pattern of enemy attacks:
Jo> Human players have a fortunate (for the computer) tendency to use the
Jo> same strategy over and over. A defense tailored to meet a particular

Player analysis would do it, non?

Jo> Expert System 'AI':
Jo> The computer has the advantage of leveraging all the most
Jo> successful skills and strategies discovered during testing. These give
Jo> the computer a 'bag of tricks' to use when the situation warrants.

This is a very good idea.

Jo> Memory of past success and failure:
Jo> The computer must also remember the success and failure rates of
Jo> the various strategies at its disposal. Some players are better suited

How about some reinforcement learning?

Have some values for each technique you can alter to alter the
'style' you use. For instance, if last time you used this
technique you attack from the left and you select the technique
again, you can approach from the right. Just a thought.

Or every time you use a strategy, lower its chance of being
selected w.r.t. the other techniques. This tries to limit
predictability.
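
In code the simple version might look like this (technique names made up):

import random

weights = {"flank_left": 1.0, "flank_right": 1.0, "frontal": 1.0, "feint": 1.0}

def pick_technique():
    name = random.choices(list(weights), weights=list(weights.values()), k=1)[0]
    weights[name] *= 0.5                  # used it, so halve its future chance
    for other in weights:                 # let everything else slowly recover
        if other != name:
            weights[other] = min(1.0, weights[other] * 1.1)
    return name

print([pick_technique() for _ in range(5)])   # rarely the same pick twice running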

Jo> Finally, the computer is just plain faster at analyzing numbers
[...]
Jo> more moves ahead. The computer can assess threats with much better
Jo> speed and accuracy than the human player. Combine this with its

Well, that's not necessarily true. What you're trying to say is
that computers are better tacticians. Strategy is where they get
blown to bits.

'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
' Sunir Shah (ss...@intranet.ca) '
' http://intranet.ca/~sshah ftp://ftp.intranet.ca/usr/synapsis '
' Fidonet: 1:241/11 BBS: The Open Fire BBS +1 (613) 584-1606 '
' '
' By the WEB: Vanity: http://intranet.ca/~sshah/ '
' The Programmers' Booklist booklist.html '
' ~`-,._.,-'~ Synapsis Entertainment synapsis.html '
' _.,-`~'-,._ WASTE (Warfare AI Contest) waste/waste.html '
' '
' comp.ai.games FAQ: ftp://ftp.intranet.ca/usr/synapsis/cagfaq?.txt '
' The Game Development Echo: Areafix GAMEDEV from Zone 1 (Fido) '
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
___ Blue Wave/QWK v2.12

>> Slipstream Jet - The QWK solution for Usenets #UNREGISTERED


ss...@intranet.ca

May 6, 1996

Subject: Re: Attacking in Strength (bloody long)

To: Ed...@ix.netcom.com
Subject: Re: Attacking in Strength (bloody long)

I'm very annoyed. I was in the middle of my relatively long post
discussing player analysis and my sister (who came back yesterday
from uni) blew the bloody fuse because she didn't think that
eight light bulbs were enough ... noooooooo, she wanted a halogen
too. (the fact that she left her stereo on for three months
before I caught it shows how much she understands the concept of
electricity).

AHHHHHHHHHHHHh.....

And the CD was on a good song to boot.

Ofc, she couldn't get off her butt and fix the fuse.. Noo.. that
requires complex motor control.

Oh well, that should teach me... when chaos theory (read: my
sister) reigns supreme, save frequently.

I hate everyone.

[I'm going to be very curt now. Suffer :)]

I'm going to outline what I'd do for a game like Warcraft...

Ed> This is another favorite area of interest of mine, and that being
Ed> what data values actually provide a representation of the "pattern
Ed> of attack the player uses"?

Player analysis, IOW, which isn't necessarily limited to
conflict, but what the hell.

Ed> In my attempts at attack pattern tracking, I've saved items such as:
Ed> Timing of attack - early in the game vs. later in the game
Ed> Direction of attack - compass directions only
Ed> Style of attack - single front vs. multiple fronts
Ed> Quantity of units - how many were used
Ed> Quality of units - what experience level was for units
Ed> Unit mix - what kind of units used by opfor and AI player
Ed> Attack Rating - number of AI units destroyed + objectives captured

You aren't measuring that relative to the computer's state. You
have one point in space (7-D space with the above criteria, but
space nonetheless), which is relative to everything. With two
points (computer's state), it becomes more obvious because you
have a point and a vector.

[Hands up who can tell I've been learning about vectors for the
last four months? :)]

I mean, it's all well and good to know what the player does, but
you need to know *why*.

Ed> patterns) is desirable, I'm curious as to what you and others think
Ed> are the data elements that are critical for demonstrating the "actual
Ed> pattern of attack" as employed by a human player.

Well, you'd need to measure the player relative to the computer, but not only
that (and y'all missed my page long thinking process getting to this... aww..)
but you need to keep in mind other landmarks and concerns in the scenario.

Here's a cheesy map (and now I have to redraw it):

t...............ttt.............tt . meadow (normal terrain)
.===.==.t.tt......t.ttt....tt...tt t tree
.===.==.t......MM...t..t.....t.... M Mine
....A.....t.t..MM.ttt.....t....... =, * Building
.==.==.==.tttt..t.......tt....MM..
.==.==.==....t.....t..ttt.....MM..
..................t.tttt.....ttt.t
...t..t....t.................ttt..
tt..tt.t..tt..ttttt........t.....t
..t.t...t..t...ttt..t..t..t.**.**.
.t..t...tt..t.tt....tttt.t..**.**.
.t...tt..t......t...ttt......B....
....tt...ttt.t........t...***.***.
....t.....ttt...t...t...t.***.***.
.tt...tt...ttt..ttttt..tt.....***.
.................................t

With that map, I can see several strategies for player (A):

- Direct attack .. head for B
- Flank - go around the outsides
- Ambush - sprinkle units around and wait for B
[This is impossible for the player as humans can only
concentrate on so many things at once. so..]

- Ambush II - Find somewhere the computer is likely to go through
or does go through often and stick troops there
- Fan - send troops out in as many paths as possible
- Attack the mine first
- etc.

None of which you are even predicting, Eric, I don't think.

And then there are tactics:

- Long-range units in back guarded by short-range in front (WC
classic)
- Run like cowards
- micro-flanking .. vs. global flanking, you just flank a little
bit within the local conflict
- Attack from the rear
- Chasing
- Run in, attack, run out
- etc.

I can't think of every possible strategy, you can't .. nobody
can. Everyone is different. So, what we need is a solution that
is independent of knowing various strategies and tactics.

Ok, once again, your criteria, Eric, are limited to the player's
attacking units' state. You can add on the victim units' state,
such as the following stats:

- Strength
- recent actions
- distance to player
- direction of motion
- etc.

But still, that means nothing because the conflict isn't an
isolated event. It's all related to things like the motion of
the bases, mines (other important landmarks), other conflicts,
movement of units (both sides) and other stuff (help me out on
this one, folks ... I'm not SuperGeneralGuy)

I was just thinking, the player is clueless at the beginning.
He generally experiments a bit. I suggest you do the same with
the computer. Place units around the map, esp. around important
landmarks.

Anyway, to determine preference to landmarks, I suggest what you
do is for each landmark in the list (which you'll premake for the
map, I s'pose.. which is fair, considering the player can do
this, esp. with save games), you compare its distance to the
conflict.. for the landmark that is closest to the conflict, add
one to its conflict counter.

BTW, because we're just using relative distances, you can leave
the distances squared... saves time instead of having to do a
square root.

Compare each *type* of landmark's (sum the various landmarks that are
of each type) count and assign guards based on that information.
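
A quick sketch of that counting scheme in Python (landmark positions and
kinds are invented for the example):

def nearest_landmark(conflict_pos, landmarks):
    """landmarks: dicts with 'pos', 'kind' and a 'conflicts' counter."""
    def dist2(a, b):                      # squared distance, no sqrt needed
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(landmarks, key=lambda lm: dist2(conflict_pos, lm["pos"]))

landmarks = [
    {"pos": (15, 3), "kind": "mine", "conflicts": 0},
    {"pos": (28, 11), "kind": "base", "conflicts": 0},
]

for conflict in [(14, 4), (16, 2), (27, 12)]:      # observed fights
    nearest_landmark(conflict, landmarks)["conflicts"] += 1

by_kind = {}
for lm in landmarks:                               # sum counts per landmark *type*
    by_kind[lm["kind"]] = by_kind.get(lm["kind"], 0) + lm["conflicts"]
print(by_kind)    # {'mine': 2, 'base': 1} -> put more guards on the mines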

Now, it's assumed there is some sort of goal. If it's something
as straightforward as destroying the computer's base, you can do
something like for each conflict, average the vectors from the
it to the base. Ofc, for flanking manouvres it'll look the same
as direct attacks... or, you can use a quantinization algorithm
on the vectors to find out the more popular directions.

Now, I wouldn't leave it at that because it isn't general enough
to use later on in the game ... You need to make that more
abstract. Measure the degrees away from the direct vector from
base A to base B for each cluster and use that later on.

You made a good point, Eric, about the time of attack... All I
can say is for the graph of attacks vs. time, quantize it so
you know when the highest number of attacks occur so you can
defend yourself. You can also extrapolate that the player isn't
going to be ready during the low ebbs right after high peaks
(she's expended her resources).

If the goal was something different, such as eradicating all
units, you'll need other criteria.. hmm... there are only a few
strategies for this, mostly the same as for destroying the base.

That's just strategy (barely). Now you have tactics.

Direction of attack: for each attack, take the direction and
then take the difference between that direction and the direction
to the player base. Average those differences over time, which
should give you the expectation of attack direction.
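
Something like this, say (directions and numbers invented):

import math

def angle_offset(attack_vec, base_vec):
    """Signed degrees between the attack direction and the direct base-to-base line."""
    a = math.atan2(attack_vec[1], attack_vec[0])
    b = math.atan2(base_vec[1], base_vec[0])
    return math.degrees((a - b + math.pi) % (2 * math.pi) - math.pi)

base_to_base = (1.0, 0.0)                        # direct line from base A to base B
observed = [(1.0, 0.4), (1.0, 0.5), (0.9, 0.6)]  # directions of past attacks
offsets = [angle_offset(v, base_to_base) for v in observed]
print(round(sum(offsets) / len(offsets), 1))     # ~27 degrees: the player favours one flank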

Strength: Average the ratios of attack strength to victim
strength AND the magnitude of attack strength AND the magnitude
of victim strength. If the average ratio is not close to the
ratio between avg. attack and avg. victim strength, you can
assume that the player doesn't take into consideration how much
strength the computer has. In that case, you should group units
together with at least twice as much strength as the player's
attack magnitude. Elsewise, 'pulse' groups of units so there are
backups ready. Of course, the player will compensate for this,
and the algorithm will compensate again and again. Recalculate
this at the end of each scenario.

Unit mix: Another great idea. You can couple this with the
player units that you got to first. That way, you can prepare to
kill those quickly to get to the ones in back (such as long-range
units). On the other hand, you can also prepare to take out the
ones in back while you take out the ones in front. The latter
would simply mean ALWAYS sending out short-range and long-range
units together. So much for caring about unit mix

Style of attack: We can break this down into organization and
not organization :).

The organization is really accounted for in the unit mix, so
we'll skip that.

As for not-organization, all I can think of is running in,
attacking, running out vs fighting vs running away vs running
away after awhile.

Hmm.. measure the number of units on each side that are destroyed
in each conflict as a percentage. Measure how many times you see
the same unit in conflict compared to the strength ratios and its
health (strong, high-stamina, healthy units can last longer than
weak units). That way you can figure out bravery and
berserkiness of the player.

I just realized it'll be pretty hard to understand what groups of
units are... I suggest you use a fuzzy, recursive fill algorithm.
Pick an opposing unit (and a computer unit if you don't know your
group configurations) and recursively search out from there in
all directions. Keep going until you're something like three
tiles away without slamming into a unit of your own creed. [just
made that up now.. <G>]
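
A sketch of that fill (iterative rather than recursive, but the same
idea; the positions are invented):

from collections import deque

def cluster(seed, unit_positions, max_gap=3):
    """Group units within `max_gap` tiles (in x and y) of the growing cluster."""
    group, frontier = {seed}, deque([seed])
    remaining = set(unit_positions) - group
    while frontier:
        x, y = frontier.popleft()
        near = {p for p in remaining
                if abs(p[0] - x) <= max_gap and abs(p[1] - y) <= max_gap}
        for p in near:
            group.add(p)
            frontier.append(p)
        remaining -= near
    return group

units = [(2, 2), (3, 2), (4, 4), (12, 12), (13, 11)]
print(cluster((2, 2), units))   # the (12, 12) pair is too far away to join this group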

And anything else you figure out in play testing!

[Rating strategies' success/failure]
Ed> Once again, I agree with the goal, but I'm curious if the ratings
Ed> applied to the various strategies, are able to actually evaluate
Ed> whether the given strategy was succussful or a failure.
[... Both strats win game = same rating?]
Ed> was used to "win" the game. There needs to be some way to provide
Ed> a differenitation of the ratings, and that (in my mind) would need

What you need is something like client evaluation, and if WASTE
had a worthwhile client evaluation article yet, I'd point you to
it. As it is, we threw out the old one because the correct
strategy was for both sides to retreat as fast as possible. :-)

What you need to do is assess what you consider good in the game
and what you consider bad. Then evaluate based on those
criteria. For example, killing the player quickly could be good,
or destroying all its units.. winning is a good thing too.
Having your units killed is bad. Losing quickly is bad too.
Losing slowly is probably much better than winning quickly, and
slightly worse (or better?) than winning slowly (entertainment
value). Etc.

Ed> me wondering what makes for the most accurate rating of a strategy?

IMHO, "depends."

:-)

This is like 257 'cording to QEdit.. yeesh.

Eric Dybsand

May 6, 1996

In <4mk0go$6...@news.bellglobal.com> ss...@intranet.ca writes:
>
>Subject: Re: Attacking in Strength (bloody long)
>

[sister stuff snipped]

>I'm going to outline what I'd do for a game like Warcraft...
>
> Ed> This is another favorite area of interest of mine, and that being
> Ed> what data values actually provide a representation of the "pattern
> Ed> of attack the player uses"?
>
>Player analysis, IOW, which isn't necessarily limited to
>conflict, but what the hell.
>

First, Sunir, thank you for your _ideas_! However, I'm really not so
interested in theory or "what I'd do" comments (although those types
of comments do make for interesting reading), but instead, I'd really
like to read about _practical implementations_ of what someone has
actually done in a game, to provide for determining the pattern of
an attack by another player.


> Ed> In my attempts at attack pattern tracking, I've saved items such as:
> Ed> Timing of attack - early in the game vs. later in the game
> Ed> Direction of attack - compass directions only
> Ed> Style of attack - single front vs. multiple fronts
> Ed> Quantity of units - how many were used
> Ed> Quality of units - what experience level was for units
> Ed> Unit mix - what kind of units used by opfor and AI player
> Ed> Attack Rating - number of AI units destroyed + objectives captured
>
>You aren't measuring that relative to the computer's state.

Perhaps I really am. All of the measurements taken in this approach
(which, as I clearly posted, I felt was mediocre at best, and which I
felt was not successful at establishing patterns that were of much
use) were relative to the state of the player initiating the attack
(regardless of whether that player was human or computer controlled)
as applied to the player that was the target of the attack.

Perhaps the brevity of my post did not provide sufficient detail for
readers to determine that.

Also, the above list is not meant to be all inclusive (hence the
use of the qualifying phrase of "such as" while introducing the list).
This list of data elements is meant only to be representative of the
many data elements that I have actually implemented and tested for
usage in attack pattern determination, in a completed game development
project.

Since it does not work as well as I would like, I'm very interested in
learning what others have *tried* and what their opinions are of the
success (or failure) of different types of data and techniques.


>None of which you are even predicting, Eric, I don't think.

That is one of the goals of determining the pattern, I would suspect.


[remaining ideas snipped only for brevity]

ss...@intranet.ca

May 9, 1996

To: Ed...@ix.netcom.com
Subject: Re: Attacking in Strength

Ed> First, Sunir, thank you for your _ideas_! However, I'm really not so

What did you expect? I'm not about to write a block of code for *your*
project. I'm not getting paid. :-)

Ed> like to read about _practical implementations_ of what someone has
Ed> actually done in a game, to provide for determining the pattern of
Ed> an attack by another player.

Most people have non-disclosure clauses and *can't* talk about those things.
But one guy did talk about his radar technique.

[Addendum: I just read your reply to him via Lynx]

> Ed> In my attempts at attack pattern tracking, I've saved items such as:

[...]


>You aren't measuring that relative to the computer's state.

Ed> Perhaps I really am. All of the measurements taken in this approach
Ed> (of which I clearly posted that I felt was mediocre at best, and that
Ed> I felt this approach was not successful at establishing patterns that
Ed> were of much use) were relative to the state of the player initiating
Ed> the attack (regardless of whether that player was human or computer
Ed> controlled) as applied to the player which was the target of the
Ed> attack.

The problem is that the player has motivations for attacking. It's not
berserk. Hence, you have to be able to construct a cause-effect chain or the
expert system (because that's what it is) is completely useless.

Ed> Also, the above list is not meant to be all inclusive (hence the

No, of course not, as you point out. However, you didn't mention a single
computer-player stat so I figured you might be just forgetting that. Besides,
it helps to hammer out the idea completely so we can apply the solution to more
than one application. I didn't mean to make it look like you never took into
account the computer opponent, but ya never know.

Ed> Since it does not work as well as I would like, I'm very interested in
Ed> learning what others have *tried* and what their opinions are of the
Ed> success (or failure) of different types of data and techniques.

As I said, there are conflict of interest problems with that. Besides, who
cares? It's obvious that this isn't a fully hashed-out problem in AI... a
little conversation didn't hurt anyone.

Eric Dybsand

May 9, 1996

In <4ms1cj$p...@news.bellglobal.com> ss...@intranet.ca writes:
>
>To: Ed...@ix.netcom.com
>Subject: Re: Attacking in Strength
>
> Ed> First, Sunir, thank you for your _ideas_! However, I'm really not so
>
>What did you expect? I'm not about to write a block of code for *your*
>project. I'm not getting paid. :-)
>
> Ed> like to read about _practical implementations_ of what someone has
> Ed> actually done in a game, to provide for determining the pattern of
> Ed> an attack by another player.
>
>Most people have non-disclaimer clauses and *can't* talk about those things.
>But one guy did talk about his radar technique.
>
>[Addendum: I just read that your reply to him via Lynx]
>
> > Ed> In my attempts at attack pattern tracking, I've saved items such as:
>[...]

> >You aren't measuring that relative to the computer's state.

Sunir,

I'm not going to be drawn into a pointless argument or a flame-war.

If I post a request for implementation results on AI considerations,
and you respond with theories, ideas and untested opinions, and then
you take offense because I pointed out that my interest, in that
request, was only in tested alternatives, then so be it.

Please, I've never suggested that you (or anyone else) not discuss
theories and ideas in this newsgroup, nor have I personally
attacked you in any manner. Also, I'm still reading comp.ai.games,
including your posts and everyone else's that catch my eye, and in
the meantime, I have a game I've got to get ready to ship.

DrmWeaver2

May 10, 1996

Anybody got the original post in this thread? Somehow my AOL reader
didn't pick it up...

Peter Schaefer

May 10, 1996

ss...@intranet.ca wrote:
>To: Joe...@anv.net

>Subject: Re: Attacking in Strength
>
>Or every time you use a strategy, lower it's chance of being
>selected w.r.o the other techniques. This tries to limit
>predictability.
>
That would work, but I feel it lacks precision...
Well, if you tune it right?

> Jo> Finally, the computer is just plain faster at analyzing numbers
>[...]
> Jo> more moves ahead. The computer can assess threats with much better
> Jo> speed and accuracy than the human player. Combine this with its
>
>Well, that's not necessarily true. What you're trying to say is
>that computers are better tacticians. Strategy is where they get
>blown to bits.

I don't think that's the point. In most war/strategy games I've played,
the computer player wasn't able to realize a large one- (or two-) move
profit, because it stuck to RULES.

What I miss is some sort of lookahead for the computer's actions.
I also state here (contest it!) that most of the people complaining about
weak AI don't have the patience to wait 10 min. for the computer to move.

>' ~`-,._.,-'~ Synapsis Entertainment synapsis.html '
>' _.,-`~'-,._ WASTE (Warfare AI Contest) waste/waste.html '

Is that a synapse?
--
Peter Schaefer

"office": scha...@malaga.math.uni-augsburg.de
http://wwwhoppe.math.uni-augsburg.de/schaefer
"leisure": scha...@mathpool.uni-augsburg.de
http://wwwhoppe.math.uni-augsburg.de/schaefer/Willkommen.html

Chant this Mantra 1024 times a day to become a happy usenet user:

Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee


ss...@intranet.ca

May 11, 1996

To: Drmwe...@aol.com

Subject: Re: Attacking in Strength

Dr> anybody got the original post in this thread... somehow my aol reader
Dr> didn't pick it up....

Read da FAQ, man. Section 1.4.

Ok.. then again, maybe you don't want to read through all the archived
messages. :)

Will Dwinnell

May 12, 1996

">Or every time you use a strategy, lower it's chance of being

>selected w.r.o the other techniques. This tries to limit

>predictability.

>

That would work, but I feel it lacks precision ..

Well, if you tune it right ?"

But what if you used some machine learning

system that was fast enough to update its

strategy after each game without being a

bother? Such a system would evolve over

time and would at least reduce

predictability.

"What I miss is some sort of lookahead for the computers actions.

I also state here( contest it! ) that most of the people complaining a

weak AI don't have the patience to wait 10 min. for the computer to mo


I can't speak specifically about whichever

game you have been discussing, but any game

using a relatively simple AI (you mentioned

rule-based systems in your message but I

didn't quite that) would seem to be improvable

by going to a more sophisticated control structure.


--
Will Dwinnell
Commercial Intelligence Inc.

Will Dwinnell

May 12, 1996

"What I miss is some sort of lookahead for the computers actions.

I also state here( contest it! ) that most of the people complaining a
weak AI don't have the patience to wait 10 min. for the computer to mo


Oh yeah, my point was that you can get

some pretty sophisticated control structures

which are fairly fast. If we are just swapping

one of these in as a replacement for a simple

rule-based system, then maintaining a similar

level of performance (strategy-wise) should not

being any substantial cost in speed. I contend

that most games progammers (both professional

and otherwise) are games programmers first and

A.I. programmers second and are simply ignorant

in large measure of what can be done with

current technologies. If they are using some

simple approach, I don't see why it can't be

replaced by an equal or better control system

without having players wait 10 minutes for the

computer to make its move.

ss...@intranet.ca

May 14, 1996

To: Schaefer

Subject: Re: Attacking in Strength

>Or every time you use a strategy, lower it's chance of being
>selected w.r.o the other techniques. This tries to limit
>predictability.

Sc> That would work, but I feel it lacks precision ..

But it's fast and simple... Ofc, there are better ways.

Sc> Well, if you tune it right ?

I doubt it.. it's independent of most information required to make a decent
decision. All my method really does is prevent predictability a little.



>that computers are better tacticians. Strategy is where they get
>blown to bits.

Sc> I don't think thats the point. In most war/strategy games I've played,
Sc> the computer player wasn't able to realize a large one(two) move
Sc> profit, because it stuck to RULES.

True.

Have you read the deep blue pages? I wonder if they're still up. Interesting
reading.

Sc> What I miss is some sort of lookahead for the computers actions.

The only problem is the static evaluation function. That only allows one
hard-coded strategy. For example, Deep Blue had great tactics but it had a
predictable strategy. If you made a dynamic eval function, perhaps the
computer would be better?

Sc> I also state here( contest it! ) that most of the people complaining
Sc> about weak AI don't have the patience to wait 10 min. for the computer
Sc> to move.

:-) Games are like that.



>' ~`-,._.,-'~ Synapsis Entertainment synapsis.html '
>' _.,-`~'-,._ WASTE (Warfare AI Contest) waste/waste.html '

Sc> Is that a synapse ?

It's whatever you want it to be (to me, it's just a filler... it's supposed to
look like an 'X' <G>).

Real synapses look totally different.

+======+----} S {--------+======+
|Neuron|AXON} y {DENDRITE|Neuron|
+======+----} n {--------+======+

Neurotransmitters get sent from the axon to the dendrite.. they stimulate an
electrochemical reaction that effectively acts as a boolean TRUE. The
dendrite also destroys the xmitter with an enzyme.

There's other important stuff involved here but this is an AI ng, not a
neurology ng. :)

ss...@intranet.ca

May 14, 1996

To: Ed...@ix.netcom.com
Subject: Re: Attacking in Strength

Ed> I'm not going to be drawn into a pointless argument or a flame-war.

Did you send this after I e-mailed you or before? I'd imagine before as my
news takes a while to get here.

Anyway ... uh, yeah. <insert rehash of everything I said in my last e-mail>

Ed> the meantime, I have a game I've got to get ready to ship.

G'luck.

ss...@intranet.ca

May 15, 1996

To: 76743...@compuserve.com

Subject: Re: Attacking in Strength

">Or every time you use a strategy, lower it's chance of being
>selected w.r.o the other techniques. This tries to limit
>predictability.
76> That would work, but I feel it lacks precision ..
76> Well, if you tune it right ?"

You know what... I just realized that when I replied to Peter's msg, I was
dissing my own idea. Oh well.

76> But what if you used some machine learning
76> system that was fast enough to update its
76> strategy after each game without being a

It'd be a tad slower, but who cares if it's postgame?

Anyway... I think all you'd need is a decent evaluation function to adjust the
weights. Suddenly I'm getting a flashback ... this sounds a lot like the
nervous network I proposed. It's not very reactionary, though, as there is no
input. So a learning nervous network in seizure?

Steven Woodcock

May 16, 1996

Peter Schaefer (schaefer) opined thusly:

: I don't think thats the point. In most war/strategy games I've played, the
: computer player wasn't able to realize a large one(two) move profit,
: because it stuck to RULES.

I agree with this completely, Peter. I feel we've got all the tools
we need to build an AI that can accurately and concisely order units
around. What we lack is the methodology for giving it *purpose* for those
moves and to actually *plan* the whole thing.

Eric's EN does some goal-setting, as do a couple of other games
I can think of to one degree or another. I know that Eric's AI in
Enemy Nations will also allow a plan to be disrupted or dropped due
to changing circumstances, which is a step in the right direction.

: What I miss is some sort of lookahead for the computers actions.
: I also state here( contest it! ) that most of the people complaining about
: weak AI don't have the patience to wait 10 min. for the computer to move.

I contest it. Back in the days of my Amiga 1000, I used to love
the way the computer took time to "think" about its next move in
the Perfect General. Of course, that only lasted about a week; then
I realized it wasn't thinking at all (at least based on its tactics). ;)

I'll gladly wait 5 minutes in a strategic game if the AI makes it
worth my while. I don't begrudge the time to a human opponent; I won't
begrudge it to the AI if it plays well.

A tangentially related story: When I was working SDI ("Star Wars") a
couple of years ago, I built an end-to-end analytical sim named SWARM.
(Officially that stood for the Strategic Warfare Model, but in reality it
stood for Steven Woodcock's Armageddon Research Model. ;) I installed a
neural-network based missile attack characterization routine into the sim
that allowed it to recognize various types of missile attacks and coordinate
its defenses accordingly. Added nearly 20 seconds to every "minute" of
simulated time, and hence increased the length of the run by nearly
a third, but my defensive measures were improved by 20-odd percent
too. *That* caused some eyebrows to raise in some high places,
and was well worth the effort.

DrmWeaver2

May 17, 1996

Cool...

You wouldn't be able to talk in any detail about any of that, would you?

Purely academic interest, as a former military intelligence "weenie"
who was consistently opposed when I attempted to introduce/use computer
modeling at lower levels of the Navy intelligence community.

DrmWeaver2

Disowned by some
An aggravation to others
Independent souls aren't worried
If someone else (anyone else) believes them
Or understands their arguments
The TRUTH must be told anyway
