
# Avoiding Obstacles to reach a destination


### Avi Pilosof

Mar 31, 1996
I'm studying AI, and we just got our final assignment.

One of the problems within it is to navigate a robot such that it
picks up objects and arranges them in a given pattern, avoiding
obstacles in its way. The destination point is known in advance.

This problem is not a gaming problem for me, but it certainly
seems to belong in this group, since I can easily see it applying
in a game situation.

I have a lot of time before this is due, and I want to get it working
using a good method.

Meanwhile, we have had two methods hinted at (to avoid obstacles):

a) Brute force: If you hit a wall, turn a little, and keep
going. This will eventually work, but is less than
inspiring.
b) Find a good path, THEN follow it.

I don't want to use (a) unless I really have to, since it's very
el-cheapo.

I would much rather find a good way to do (b), but I'm not sure how.

I have briefly considered 2 methods:

i) Ray casting - backwards. I'm not sure exactly how I
would do this, but I was thinking to start from the
destination, and try casting towards the robot. Upon
failure, try to offset the angle of casting a little,
until you've gone full circle (increment both +ive
and -ive with each cycle). This gives me the first
ray, but I then have to cast more rays (recursively)
until the robot is hit. But once the initial ray finds
a "way out" around the obstacles, where does the next
ray begin, etc?

ii)Graph algorithms. I was thinking to define a graph
of randomly (??) distributed vertices around the
destination, and where there is no obstacle from one
vertex to another, an edge exists. This would lend
itself to a shortest-route algo. I'm VERY vague about
this method, since it doesn't seem very concrete.
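Method (ii) is more concrete than it sounds: it is essentially a visibility graph over sampled vertices (what robotics texts call a probabilistic roadmap). A minimal sketch, under my own assumptions (circular obstacles, a 10x10 world, breadth-first search over the roadmap -- none of this is from the assignment):

```python
import math
import random
from collections import deque

def segment_clear(p, q, obstacles):
    """True if segment p-q misses every circular obstacle ((cx, cy), r)."""
    (px, py), (qx, qy) = p, q
    dx, dy = qx - px, qy - py
    for (cx, cy), r in obstacles:
        # Closest point of the segment to the obstacle centre, clamped.
        t = ((cx - px) * dx + (cy - py) * dy) / ((dx * dx + dy * dy) or 1e-12)
        t = max(0.0, min(1.0, t))
        if math.hypot(px + t * dx - cx, py + t * dy - cy) <= r:
            return False
    return True

def roadmap_path(start, goal, obstacles, n_samples=100, seed=1):
    """Scatter random vertices, add an edge wherever line of sight is
    clear, then breadth-first search from start (index 0) to goal (1)."""
    random.seed(seed)
    verts = [start, goal] + [(random.uniform(0, 10), random.uniform(0, 10))
                             for _ in range(n_samples)]
    adj = {i: [] for i in range(len(verts))}
    for i in range(len(verts)):
        for j in range(i + 1, len(verts)):
            if segment_clear(verts[i], verts[j], obstacles):
                adj[i].append(j)
                adj[j].append(i)
    prev, queue = {0: None}, deque([0])
    while queue:
        u = queue.popleft()
        if u == 1:                      # reached the goal vertex
            path = []
            while u is not None:
                path.append(verts[u])
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None                         # goal not connected to start
```

Breadth-first search minimises hop count, not distance; running Dijkstra over the edge lengths instead would give the shortest-route behaviour the post asks about.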

Anyway, my request is for hints and pointers and algorithm
names.

I'm **NOT** asking for a solution, I would just like some direction,
and perhaps a reference. Are my ideas worthwhile? Are they junk? etc...

I would really appreciate any response (especially via e-mail).

TIA.

###################################################################

Avi Pilosof - av...@cse.unsw.edu.au
- http://www.cse.unsw.edu.au/~avip/

> You cannot propel yourself forward by patting yourself on the back.

### Edward Montgomery

Apr 2, 1996, to av...@cse.unsw.edu.au
>
>One of the problems within it is to navigate a robot such that it
>picks up objects and arranges them in a given pattern, avoiding
>obstacles in its way. The destination point is known in advance.
>

>Meanwhile, we have had two methods hinted at (to avoid obstacles):
>
> a) Brute force:

> b) Find a good path, THEN follow it.

b)

>
>I don't want to use (a) unless I really have to, since it's very
>el-cheapo.

agreed.

>
>I would much rather find a good way to do (b), but I'm not sure how
>
>I have briefly considered 2 methods:
>

> ii)Graph algorithms. I was thinking to define a graph
> of randomly (??) distributed vertices around the
> destination, and where there is no obstacle from one
> vertex to another, an edge exists. This would lend
> itself to a shortest-route algo. I'm VERY vague about
> this method, since it doesn't seem very concrete.

Then try a grid of squares (coordinate space, 2D I presume?). Easy to code,
and handles this infinitely better.

>
>Anyway, my request is for hints and pointers and algorithm
>names.

Build a grid or a graph, depending on the coordinate space appropriate to your
objects, obstacles and robot. Add obstacles to the graph/grid as they are
encountered. Use a breadth-first search from the robot until the destination
is encountered. Rerun it after each obstacle is added.

If you want to optimize, save the graph/grid info between obstacle encounters,
instead of regenerating it. But on small grids, the search is very fast.
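The grid-plus-breadth-first-search recipe above can be sketched in a few lines (the '#'-for-blocked, '.'-for-free map encoding is my own convention, not from the post):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid of strings; '#' cells
    are blocked.  Returns the list of (row, col) steps, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk the parent links back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and step not in prev:
                prev[step] = cell
                queue.append(step)
    return None
```

Adding a newly encountered obstacle is just flipping a cell to '#' and rerunning, exactly as described above.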

### Bjorn Reese

Apr 3, 1996
Avi Pilosof (av...@cse.unsw.edu.au) wrote:

> One of the problems within it is to navigate a robot such that it
> picks up objects and arranges them in a given pattern, avoiding
> obstacles in its way. The destination point is known in advance.

You could use Potential Field as suggested by Jean-Claude Latombe
("Robot Motion Planning", ISBN 0-7923-9129-2)

Make your goal an attractive force, and obstacles to be avoided
repulsive forces. Imagine a marble rolling down towards the goal.
All obstacles are hills which the marble will roll around. This
approach has been applied to commercial robot controllers. The
disadvantage of this method is that it easily gets trapped in
local minima.
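The marble-rolling idea reduces to one force computation per frame. A minimal sketch, assuming particular force laws of my own choosing (linear attraction to the goal, inverse-square repulsion from point obstacles; Latombe discusses several variants):

```python
import math

def field_step(pos, goal, obstacles, k_att=1.0, k_rep=4.0, size=0.1):
    """One descent step: linear pull toward the goal, inverse-square
    push away from each point obstacle."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d2 = dx * dx + dy * dy + 1e-12
        fx += k_rep * dx / d2
        fy += k_rep * dy / d2
    norm = math.hypot(fx, fy) or 1e-12
    return (pos[0] + size * fx / norm, pos[1] + size * fy / norm)

def roll(start, goal, obstacles, max_steps=500):
    """Roll the marble; returns (final position, reached-goal flag)."""
    pos = start
    for _ in range(max_steps):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < 0.25:
            return pos, True
        pos = field_step(pos, goal, obstacles)
    return pos, False          # ran out of steps: stuck in a local minimum
```

With an obstacle sitting exactly on the start-goal line the two forces cancel along that line and the roll stalls -- the local-minimum trap mentioned above.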

> This problem is not a gaming problem for me, but it certainly
> seems to belong in this group, since I can easily see it applying
> in a game situation.

Indeed. I'm experimenting with the above mentioned method, and
it seems to do very well for arcade type of games - very smooth
motions.

--

"It's getting late in the game to show any pride or shame" - Marillion

### Tim Hardy

Apr 4, 1996
I don't have the original post, but it sounds like you need a shortest
path algorithm (a tried and true algorithm for doing exactly what you
want). We desperately need to put this in a faq. Read the thread I
started "Best Path Algorithm" if you want to know about A* and see
some C++ source code.

I'm working on a game and building up a nice collection of C++ code to put in the faq.

Tim

### John Christian Lonningdal

Apr 5, 1996
av...@cse.unsw.edu.au (Avi Pilosof) wrote:

>I'm studying AI, and we just got our final assignment.

>One of the problems within it is to navigate a robot such that it

>picks up objects and arranges them in a given pattern, avoiding
>obstacles in its way. The destination point is known in advance.

>This problem is not a gaming problem for me, but it certainly

>seems to belong in this group, since I can easily see it applying
>in a game situation.

Try some shortest path algorithms. Take a look at my web page for some
solutions, including a description of the A* algorithm.

Regards,
John

------------------------------------------------------------------------------
John Christian Lonningdal - Web: http://www.lis.pitt.edu/~john
jo...@lis.pitt.edu - (412) 521-9386 - 6611 Wilkins Ave, Pittsburgh, PA 15217
------------------------------------------------------------------------------
Get JOBE 1.5 a sprite editor @ http://www.lis.pitt.edu/~john/jobe/jobe.htm
Try out FUSECUTTER! @ http://www.lis.pitt.edu/~john/fusecut/fusecut.htm

### Avi Pilosof

Apr 5, 1996

: You could use Potential Field as suggested by Jean-Claude Latombe

: ("Robot Motion Planning", ISBN 0-7923-9129-2)

: Make your goal an attractive force, and obstacles to be avoided
: repulsive forces. Imagine a marble rolling down towards the goal.
: All obstacles are hills which the marble will roll around. This
: approach has been applied to commercial robot controllers. The
: disadvantage of this method is that it easily gets trapped in
: local minima.

This sounds like what one uses in flocking algorithms: At each "frame",
the destination contributes a vector, and the obstacle(s) contributes a
vector.

However, as you say, I'm not guaranteed to reach my destination. If
there's an obstacle lying between me and my target, such that it is
perpendicular to the line created between me and my target, I will
eventually come to a stop as the forces balance...

Is this correct?

### ss...@intranet.ca

Apr 6, 1996
To: Tim_...@nt.com
Subject: Re: Avoiding Obstacles to

I'll read the FAQ on FAQs and take it over seeing as I'm the only one ('side
from you) who cares. Now to tell Steve Furlong that he no longer has to care.
:-)
--
Sunir Shah (ss...@intranet.ca) http://intranet.ca/~sshah/
BBS: The Open Fire BBS +1(613)584-1606 Fidonet: 1:241/11
The Programmers' Booklist : http://intranet.ca/~sshah/booklist.html
Synapsis Entertainment : http://intranet.ca/~sshah/synapsis.html
WASTE (Wargame AI Contest): http://intranet.ca/~sshah/waste/waste.html

### Steven Woodcock

Apr 6, 1996

: Make your goal an attractive force, and obstacles to be avoided
: repulsive forces. Imagine a marble rolling down towards the goal.
: All obstacles are hills which the marble will roll around. This
: approach has been applied to commercial robot controllers. The
: disadvantage of this method is that it easily gets trapped in
: local minima.

Aces of the Deep used this very same method for handling their
destroyer/convoy/submarine AI. It works like a charm.

Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Information Systems Group <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: wood...@escmail.orl.mmc.com (Work), swoo...@cris.com (Home) |
| Web: http://www.cris.com/~swoodcoc/wyrdhaven.html (Top level page) |
| http://www.cris.com/~swoodcoc/ai.html (Game AI page) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| the Lockheed Martin Information Systems Group |
+=============================================================================+

### Dean Mills

Apr 7, 1996
:>: The disadvantage of this method is that it easily gets trapped in
:>: local minima.

I found, in my implementation using the above technique, it was very easy to
trap an AI player, such as...

      T

    XXXXX
    X   X
    X   X
    X

      A

Using a slightly modified A* search, the AI player would end up in the U,
trapped for all intents and purposes. Anyone have any ideas on this? I
really don't want to redo this approach, as the rest of it works well! :)

D.Mills

______________________________________________________________
Dean Mills - Savage SoftWare Company
Team OS/2 Member

E-Mail - dmi...@mi.net - data...@ftcnet.com
Savage SoftWare, a division of Savage Online Services Inc.
______________________________________________________________

### Jasper Phillips

Apr 8, 1996

Just modify your maps so that all blocking terrain is convex. ;-)
Seriously though, I'm guessing that this situation, while common and
stifling, isn't the usual one, and that this kind of "springy" AI
normally works fairly well.

If this is the case, couldn't you just put in a special case if the AI
gets stuck? So if your AI can't go where it wants, you can switch to some
other AI for a while. The problem naturally is when to switch back to
the springy AI, but I'd guess you could start with something simple like
returning as soon as you had direct line of sight.

Another option might be to combine this with some sort of precomputed
"best path routes" between general areas, and let that kick in if you really
get stuck.

Just some ideas,
-Jasper

--
/\ Jasper Phillips (Pit Fiend) ______,....----,
/VVVVVVVVVVVVVV|==================="""""""""""" ___,..-'
`^^^^^^^^^^^^^^|======================----------""""""
\/ http://www.cs.orst.edu/~philljas/

### Stephane Bessette

Apr 8, 1996
In article <4k9k6v$a...@scratchy.mi.net>, dmi...@mi.net (Dean Mills) wrote:
>:>: The disadvantage of this method is that it easily gets trapped in
>:>: local minima.
>
>I found, in my implementation using the above technique, it was very easy to
>trap an AI player, such as...
>
>   T
>
> XXXXX
> X   X
> X   X
> X
>
>   A
>
>Using a slightly modified A* search, the AI player would end up in the U,
>trapped for all intents and purposes. Anyone have any ideas on this, as I
>really don't want to redo this approach, as the rest of it works well! :)

If you have a way of determining when the AI player is trapped (can't
find novel movements, consumes too much time, ...), then you could switch
the AI governing the computer player. For example, in the situation
presented above, the AI/strategy of always turning left or always turning
right would easily solve the problem.

Now that I think about it, it seems as though I am suggesting that you
implement a superAI, in the sense that this AI would select the most
appropriate AI to use when faced with a certain environment. In the
current scenario, the superAI would utilise your AI until the computer
player is trapped. When this situation arises, the superAI would now
utilise the TurnLeft/TurnRight AI until either the destination is reached
or the computer player is once again moving in the direction of the goal.

--

Stephane [TEAM OS/2]

### Steve Hodsdon

Apr 9, 1996
In article <4k9k6v$a...@scratchy.mi.net>, From dmi...@mi.net (Dean
Mills), the following was written:

> Using a slightly modified A* search, the AI player would end up in the

> U, trapped for all intents and purposes. Anyone have any ideas on
> this, as

Simple. Remove the 'U' from the map.

Instead of exerting more effort in making the movement algorithm smarter
you just make the map simpler.

Steve

### Tim Thomas

Apr 9, 1996

In <4k9k6v$a...@scratchy.mi.net> dmi...@mi.net (Dean Mills) writes:
>:>: The disadvantage of this method is that it easily gets trapped in
>:>: local minima.
>I found, in my implementation using the above technique, it was very easy to
>trap an AI player, such as...
>   T
> XXXXX
> X   X
> X   X
> X
>   A

>Using a slightly modified A* search, the AI player would end up in the U,

>trapped for all intents and purposes. Anyone have any ideas on this, as I
>really don't want to redo this approach, as the rest of it works well! :)

How about using a hierarchical method for determining the path?
Organize the terrain into something similar to a quad tree. Take your pick
how you construct it. The weighting of each "quad" is the average of
all those within. (Storing min and max might be useful as well...)

Now this is how the algorithm would work. At the highest level of granularity
pick the path to your destination which requires the least amount of "effort".
Now enter the first "quad" that is on the path. Bump down to the next level
of granularity, recursively applying the same algorithm. The trick here is,
if you encounter a position where you can't go anywhere, bump back up to the
previous level of granularity and select a path which requires a little more
effort.

Such an algorithm, or one similar to it, would at the very least allow you to
"back out" of corners like the one you described above. Add a little bit of
lookahead before moving and you're set.

Tim Thomas
tho...@urbana.mcd.mot.com

### Bjorn Reese

Apr 10, 1996
Dean Mills wrote:
>
> :>: The disadvantage of this method is that it easily gets trapped in
> :>: local minima.
>
> I found, in my implementation using the above technique, it was very easy to
> trap an AI player, such as...

You basically have two options: either avoid the local minimum, or escape
it once trapped.

The former is the preferred option, but it can be somewhat tricky.
One possibility is to use "beacons" -- well-placed attractive
points. Let the object travel from one beacon to the next until its
destination is within sight.

If you choose the latter option, you could try something like this:

1) Detect a dead-end (if the object doesn't reach its destination
within a specified time, or if it stays too long in a small
area)
2) Pick a random location which is reachable from the current
location, and go there first.
3) When you reach the random location, go to the desired location.

There are plenty of other methods for both options though.
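The three-step escape above can be sketched as a wrapper around whatever (trappable) movement routine is in use. Everything here is a hypothetical stand-in: `move_toward` is an assumed mover callback, and the stall test (lingering in a tiny set of positions) is one of the two detection heuristics suggested in step 1:

```python
import random

def navigate(move_toward, pos, goal, reachable_points,
             max_ticks=200, stall_window=20, seed=7):
    """Wrap a trappable mover: if the object lingers in a small area
    (step 1), detour to a random reachable point (step 2), and on
    arrival head for the real goal again (step 3)."""
    random.seed(seed)
    target, history = goal, []
    for _ in range(max_ticks):
        pos = move_toward(pos, target)
        if pos == goal:
            return pos
        history.append(pos)
        recent = history[-stall_window:]
        # Step 1: dead-end if the last stall_window ticks cover <= 3 cells.
        if len(recent) == stall_window and len(set(recent)) <= 3:
            target = random.choice(reachable_points)   # step 2: detour
            history.clear()
        elif pos == target:
            target = goal                              # step 3: resume
    return pos
```

The wrapper is agnostic about the underlying mover, so it composes with the potential-field approach discussed earlier in the thread.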

### Robert A. Uhl

Apr 10, 1996
Steve Hodsdon <10336...@compuserve.com> spake thusly:

>
>Simple. Remove the 'U' from the map.
>
>Instead of exerting more effort in making the movement algorithm smarter
>you just make the map simpler.

You cannot do this if writing an AI for a game like Bolo. Players
just plug in AI brains and use their own maps. You have no control
(nor should you, really).

That said, at one time I gave a lot of thought to various movement
algorithms. I came up with one whose one requirement is that LOS must
be cheap to calculate. I'll post it if anyone's interested.

--
Robert Uhl | Save the Ales! Homebrewer Since 1995
Orthodox Christian | If you like the Macintosh, send mail to
Macintosh Programmer | evang...@macway.com for information about the
Bolo Player (Spectre) | EvangeList, Guy Kawasaki's list for Mac advocates.

### Darren Reid

Apr 10, 1996
Jasper Phillips wrote:
>
> In article <4k9k6v$a...@scratchy.mi.net>, Dean Mills <dmi...@mi.net> wrote:
> >Using a slightly modified A* search, the AI player would end up in the U,
> >trapped for all intents and purposes. Anyone have any ideas on this, as I
> >really don't want to redo this approach, as the rest of it works well! :)

> Just modify your maps so that all blocking terrain is convex. ;-)

Chris Crawford once spent a long time on just this exact problem...and finally realized
that the best solution was removing concave areas from the map. A* should fix this
problem, so that slight modification must be breaking it.

> Seriously though, I'm guessing that this situation while comman and
> stiffling, isn't the usual one, and that this kind of "springy" AI
> normally works fairly well.
>
> If this is the case, couldn't you just put in a special case if the AI
> get stuck? So if your AI can't go where it wants, you can switch to some
> other AI for a while. The problem naturally is when to switch back to
> the springy AI, but I'd guess you could start with something simple like
> returning as soon as you had direct line of sight.

Yup. If your attractor/repulsor AI is breaking A*, just detect stuck or "dancing"
units and drop back to a real A* or Dijkstra's for a few turns.

>
> Another option might be to combine this with some sort of pre computed
> "best path routes" between general areas, and let that kick in if you really
> got stuck.

This adds an extra benefit to the AI...pre-planned tactics ala pincer moves etc.

I would use attractor/repulsor to determine targets, then use Dijkstra's to compute a
true shortest path that avoids heavy repulsor (defended) areas and follow that path. If
the target is moving, you can re-evaluate or choose to follow the attack path for a few
turns anyway...

### Jasper Phillips

Apr 10, 1996
In article <4kdh4f$c...@dub-news-svc-5.compuserve.com>,

Steve Hodsdon <10336...@compuserve.com> wrote:
>In article <4k9k6v$a...@scratchy.mi.net>, From dmi...@mi.net (Dean
>Mills), the following was written:
>> Using a slightly modified A* search, the AI player would end up in the
>
>> U, trapped for all intents and purposes. Anyone have any ideas on
>> this, as
>
>Simple. Remove the 'U' from the map.
>
>Instead of exerting more effort in making the movement algorithm smarter
>you just make the map simpler.
>
>Steve
>

Better yet, just remove all terrain...
This should do wonders in simplifying your AI. ;-)

### Steven Woodcock

Apr 11, 1996
Robert A. Uhl (ru...@odin.cair.du.edu) opined thusly:
: You cannot do this if writing an AI for a game like Bolo. Players

: just plug in AI brains and use their own maps. You have no control
: (nor should you, really).

Well said Robert!

: That said, at one time I gave a lot of thought to various movement

: algorithms. I came up with one whose one requirement is that LOS must
: be cheap to calculate. I'll post it if anyone's interested.

Hey, I'd like to see it. I'm always looking for fast code that I
won't have to write myself...... ;)

Steven Woodcock
"The One and Only"

### Jasper Phillips

Apr 11, 1996

>Dean Mills wrote:
>>
>> :>: The disadvantage of this method is that it easily gets trapped in
>> :>: local minima.
>>
>> I found, in my implementation using the above technique, it was very easy to
>> trap an AI player, such as...
>
>You basicly have two options. Either avoid the local minima, or escape
>it trapped.
>
>The former is the preferred option, but it can be somewhat tricky.
>One possibility is to use "beacons" -- well-placed attractive
>points. Let the object travel from one beacon to the next until its
>destination is within sight.

The main problem with this is that you have to set up beacons for all maps
by hand... although most games have premade terrain anyway. You could also
just make concave areas more powerful repulsors than other blocking terrain.
Ideally the same terrain could be attractive or repulsive in different
strengths for different types of units (e.g. ground as opposed to naval).

I'm wondering how effective writing code to try to figure out how to place
attractors and repulsors would be... Any ideas?

[second option snipped]

>
>--

-Jasper

### ss...@intranet.ca

Apr 11, 1996
To: PhilljasUnknownx.engr.orst.edu

Subject: Re: Avoiding Obstacles to

Ph> Just modify your maps so that all blocking terrain is convex. ;-)

Seriously, that is a viable solution.

Ph> If this is the case, couldn't you just put in a special case if the AI
Ph> get stuck? So if your AI can't go where it wants, you can switch to

Question: How does it tell when it is stuck?

Ph> Another option might be to combine this with some sort of pre computed
Ph> "best path routes" between general areas, and let that kick in if you
Ph> really got stuck.

It'd get stuck because of those too if the map is modifiable.

--
Sunir Shah (ss...@intranet.ca) http://intranet.ca/~sshah/
BBS: The Open Fire BBS +1(613)584-1606 Fidonet: 1:241/11
The Programmers' Booklist : http://intranet.ca/~sshah/booklist.html
Synapsis Entertainment : http://intranet.ca/~sshah/synapsis.html
WASTE (Wargame AI Contest): http://intranet.ca/~sshah/waste/waste.html

*NEWSFLASH*

-GAMEDEV, The Game Development Echo, is now on the Zone 1 Backbone (Fido)
-Stay tuned for the release of the *new* comp.ai.games FAQ . . .

### Szu-Wen Huang

Apr 12, 1996
ss...@intranet.ca wrote:
: To: PhilljasUnknownx.engr.orst.edu

: Subject: Re: Avoiding Obstacles to

: Ph> Just modify your maps so that all blocking terrain is convex. ;-)

: Seriously, that is a viable solution.

: Ph> If this is the case, couldn't you just put in a special case if the AI
: Ph> get stuck? So if your AI can't go where it wants, you can switch to

: Question: How does it tell when it is stuck?

Alternatively, why not just define your terrain as not having obstacles
your units can *never* cross? Replace those with "expensive" terrain
that takes a long time to cross, but is not impossible, and your convex/concave
problem goes away as well.

: Ph> Another option might be to combine this with some sort of pre computed

: Ph> "best path routes" between general areas, and let that kick in if you
: Ph> really got stuck.

: It'd get stuck because of those too if the map is modifiable.

How's WASTED coming along, Sunir? :) Oh, I'm sorry, was it WASTE instead?
:)

### Simon Slavin

Apr 12, 1996
In article <4k9k6v$a...@scratchy.mi.net>,
dmi...@mi.net (Dean Mills) wrote:

> :>: The disadvantage of this method is that it easily gets trapped in
> :>: local minima.
>
> I found, in my implementation using the above technique, it was very easy to
> trap an AI player, such as...
>

>   T
>
> XXXXX
> X   X
> X   X
> X
>
>   A

When the piece gets trapped, take a note of the cells occupied by the
obstacle. Follow the left-hand wall until no part of the obstacle lies
between the piece and the target, then continue. [Simple algorithm.]
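This left-hand-wall rule is the classic Bug-style follower. One step of it, on a 4-connected grid (the '#' encoding is my assumption; Simon's termination test, "no part of the obstacle between piece and target", would sit on top of this loop):

```python
# Headings in clockwise order: north, east, south, west.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def wall_step(grid, pos, heading):
    """Left-hand rule: prefer turning left, then straight, then right,
    then reversing -- the first direction whose cell is free wins."""
    for turn in (-1, 0, 1, 2):
        h = (heading + turn) % 4
        nr, nc = pos[0] + DIRS[h][0], pos[1] + DIRS[h][1]
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                and grid[nr][nc] != '#':
            return (nr, nc), h
    return pos, heading        # completely boxed in
```

Following the right-hand wall is the same loop with the turn order reversed, which bears on the point-to-ponder below.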

If you've got the processing power and storage space free then precompute
the path before you start out on any journey. [Extra marks.]

Point to ponder: what governs whether you should follow the left-hand or
right-hand wall? [You can take a *good* guess from the 'Simple' alg.]

Simon.
--
sla...@hearsay.demon.co.uk -- the poster | "Sometimes a .sig is just
formerly known as sla...@entergrp.demon.co.uk. | a .sig." -- Sigmund Freud.

### Richard Wesson

Apr 13, 1996
In article <4jn4f2$f...@mirv.unsw.edu.au>,

Avi Pilosof <av...@cse.unsw.edu.au> wrote:
>I'm studying AI, and we just got our final assignment.
>
>One of the problems within it is to navigate a robot such that it
>picks up objects and arranges them in a given pattern, avoiding
>obstacles in its way. The destination point is known in advance.
>
[...]

>
>I have alot of time before this is due, and I want to get it working
>using a good method.
>
>Meanwhile, we have had two methods hinted at (to avoid obstacles):
>
> a) Brute force: If you hit a wall, turn a little, and keep
> going. This will eventually work, but is less than
> inspiring.
> b) Find a good path, THEN follow it.
>
>I don't want to use (a) unless I really have to, since it's very
>el-cheapo.
>
>I would much rather find a good way to do (b), but I'm not sure how
>
[...]

>I'm **NOT** asking for a solution, I would just like some direction,
>and perhaps a reference. Are my ideas worthwhile? Are they junk? etc...
>
>I would really appreciate any response (especially via e-mail).
>
>TIA.
>

OK, here are two ideas.

Take a big square. If it is completely free, mark it as clear. If it
is completely blocked, mark it as blocked. If it is partially clear,
give this square 4 children (in effect dividing it into 2x2) and repeat
this procedure for all of the children.

At the end of this, you'll have a number of squares in this tree-structure,
all the leaves will be either impassable or clear.

Search recursively for a path from free square to free square.

Once you have such a path, proceed from the center of each free square
to the next.

This should be pretty fast. It will work especially well for a large
board that is mostly free space. Since you are stepping from the center
of one square to another, you will have a slightly jagged looking path.
Once you have the quad-tree in place it will work for going from anywhere
to anywhere.
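The recursive subdivision above can be sketched directly (the tuple representation and the '#' cell encoding are my own; the board side must be a power of two for the clean 2x2 split):

```python
def quadtree(grid, r0=0, c0=0, size=None):
    """Recursively classify a square as 'clear', 'blocked', or a node
    with four half-size children, exactly as described above."""
    if size is None:
        size = len(grid)
    cells = [grid[r][c] for r in range(r0, r0 + size)
                        for c in range(c0, c0 + size)]
    if all(ch != '#' for ch in cells):
        return ('clear', r0, c0, size)
    if all(ch == '#' for ch in cells):
        return ('blocked', r0, c0, size)
    half = size // 2
    return ('node', [quadtree(grid, r0 + dr, c0 + dc, half)
                     for dr in (0, half) for dc in (0, half)])

def free_squares(tree):
    """Collect the clear leaves; path search then proceeds from the
    centre of one free square to the next."""
    if tree[0] == 'clear':
        return [tree[1:]]
    if tree[0] == 'blocked':
        return []
    return [sq for child in tree[1] for sq in free_squares(child)]
```

On a mostly-open board the big clear squares stay single leaves, which is why this works especially well for large, sparse maps.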

Here's another idea, rather like a breadth-first search but using the board
itself to store things. It's a modified flood-fill.

-- WARNING -- THE FOLLOWING MAY BE CONSIDERED A 'SOLUTION'

I call it "Ant Races".

Have a list of ants, starting out at the start. Each ant knows how many
moves it took to get where it is. Each turn, each ant moves into a free
space and 'colors' that space with the current number of ticks elapsed. If
there is more than 1 free space the ant can get to, it spawns a child
(to be added to the list of ants) for each extra free space. Ants that
cannot get to a free space will die and be removed from the list of ants.

Once you arrive at the destination, you can trace your path back by following the colors in decreasing order.

This should be much faster than the typical breadth-first search, because
you don't have to check an open or a closed list -- just look at the
board. The algorithm is O(size of board) -- every square on the board
is looked at a constant number of times, maybe 2.5 times or so.
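The ant-race wavefront, as a sketch (storing the tick number on the board in place of open/closed lists, per the description; the '#' grid convention is my assumption):

```python
from collections import deque

def ant_races(grid, start, goal):
    """Wavefront flood-fill: colour each free square with the tick at
    which an ant first reaches it, then walk back from the goal along
    strictly decreasing tick values."""
    rows, cols = len(grid), len(grid[0])
    tick = [[None] * cols for _ in range(rows)]
    tick[start[0]][start[1]] = 0
    frontier = deque([start])
    while frontier and tick[goal[0]][goal[1]] is None:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and tick[nr][nc] is None:
                tick[nr][nc] = tick[r][c] + 1
                frontier.append((nr, nc))
    if tick[goal[0]][goal[1]] is None:
        return None                     # all ants died; goal unreachable
    path = [goal]
    while path[-1] != start:
        r, c = path[-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and tick[nr][nc] == tick[r][c] - 1:
                path.append((nr, nc))
                break
    return path[::-1]
```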

It's simple to implement and will give you a very close to optimal
path. The drawback is that you have to repeat all that work for every
different start/destination, so it isn't the best if you have to do a bunch
of paths from/to different places. For that, I would suggest the quad-tree.

-- Richard Wesson
(wes...@cse.ogi.edu)

### Bjorn Reese

Apr 13, 1996
Jasper Phillips wrote:

> The main problem with this is that you have to setup beacons for all maps
> by hand... Although most games have premade terrain anyway. You could also
> just make concave areas more powerfull repulsors than other blocking terrain.

This could be a problem if your target is within the concavity at some
stage.

> I'm wondering how effective writing code to try to figure out how to place
> attractors and repulsors would be... Any ideas?

I think selecting the "beacons" from the meeting points (vertices) of
a Voronoi diagram could be pretty effective.

NB: I've quoted the word "beacons" because it usually has a slightly
different meaning in robotics.

--

### Bjorn Reese

Apr 13, 1996
ss...@intranet.ca wrote:

> Question: How does it tell when it is stuck?

The easy way out would be to use time. The object could be considered
stuck either if it hasn't reached its destination within a reasonable
time, or if it stays in the same area for too long.

### Dean Mills

Apr 13, 1996
:>Subject: Re: Avoiding Obstacles to

:>
:> Ph> Just modify your maps so that all blocking terrain is convex. ;-)
:>
:>Seriously, that is a viable solution.

Actually, in this case Sunir, it is not.

:> Ph> If this is the case, couldn't you just put in a special case if the AI

:> Ph> get stuck? So if your AI can't go where it wants, you can switch to
:>

:>Question: How does it tell when it is stuck?

That was the point of my original post! :)

:>It'd get stuck because of those too if the map is modifiable.

It is, and thus, I don't have any real control over the obstacles.

D.Mills

### Dean Mills

Apr 13, 1996
:>Alternatively, why not just define your terrain as not having obstacles

:>your units can *never* cross? Replace those with "expensive" terrain
:>that take long to cross, but not impossible, and your convex/concave
:>problem goes away as well.

Well, since the whole idea of the game is to be as realistic as possible, this
is not a viable solution. Ships going overland, tanks traversing mountain
ranges, etc., wouldn't look too hot in the game at all! :)

D.Mills

### ss...@intranet.ca

Apr 13, 1996
To: Phil...@mb.engr.orst.edu

Subject: Re: Avoiding Obstacles to

Ph> Better yet, just remove all terrain...
Ph> This should do wonders in simplifying your AI. ;-)

And then give the computer a *really* big gun and the player a stick.

Guaranteed win every time. :-)

### Darren Reid

Apr 13, 1996
Simon Slavin wrote:
> When the piece gets trapped, take a note of the cells occupied by the
> obstacle. Follow the left-hand wall until no part of the obstacle lies
> between the piece and the target, then continue. [Simple algorithm.]

Only works if the non-traversable terrain is only slightly concave. If the concavity
bends beyond a tangent to the line drawn to the target, you will start back towards the
target while still trapped within the cavity. Been there, done that ;)

Seriously, am I missing something here? Do you really want a path algorithm as stupid as
the C&C one? What's wrong with a true A* or Dijkstra's? Check out:
http://www.lis.pitt.edu/~john/shorpath.htm
for lots of good stuff.
There is a nice white-paper on shortest path algorithms in the 1996 CGDA proceedings, on
page 439.

An interesting point that I just realized: A* is faster than Dijkstra's, but not always as
good...it uses heuristics, and tries to go towards the target and around obstacles. This
can lead to it finding a more direct but possibly more expensive (in movement points)
path than Dijkstra's...and this side-effect might actually be useful to someone who is
trying to simulate "realistic" unit movement over "perfect" unit movement.
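For reference, grid A* is only a few lines more than breadth-first search. One nit on the point above: with an admissible (never-overestimating) heuristic such as Manhattan distance on a uniform-cost grid, A* still returns a cheapest path; the "more direct but costlier" behaviour appears when the heuristic is inflated. A sketch, with my own grid encoding:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid ('#' blocked) with the Manhattan-distance
    heuristic.  Returns a cheapest path as a list of (row, col), or None."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]       # (f = g + h, g, cell)
    g = {start: 0}
    prev = {start: None}
    while open_heap:
        _, cost, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        if cost > g[cell]:
            continue                         # stale heap entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and cost + 1 < g.get(nxt, float('inf')):
                g[nxt] = cost + 1
                prev[nxt] = cell
                heapq.heappush(open_heap, (g[nxt] + h(nxt), g[nxt], nxt))
    return None
```

Dijkstra's is the same loop with `h` returning 0, which is the "spiral" versus "searchlight" contrast described in the next post.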

-Darren Reid

### Steven Woodcock

Apr 14, 1996
Darren Reid (shok...@nbnet.nb.ca) opined thusly:
: Seriously, am I missing something here? DO you really want a path algorithm
: as stupid as the C&C one? What's wrong with a true A* or Dijkstra's? Check
: out: http://www.lis.pitt.edu/~john/shorpath.htm for lots of good stuff.

: There is a nice white-paper on shortest path algorithms in the 1996 CGDA
: proceedings, on page 439.

That's Bryan Stout's presentation; he lurks around here too sometimes.

Bryan did a great job of covering all the major pathing algorithms and
approaches, and had a little tool that demonstrated each. You could watch
how (normal) Dijkstra's "spirals" out looking for a solution vs. A*'s
"searchlight" method. It was very cool and easily one of the best
presentations at the CGDC.

: An interesting point that I just realized: A* is faster than Dijkstra, but

: not always as good...it uses heuristics, and tries to go towards the target
: and around obstacles. This can lead to it finding a more direct but
: possibly more expensive (in movement points) path than Dijkstra...and this
: side-effect might be actually useful to someone who is trying to simulate
: "realistic" unit movement over "perfect" unit movement.

Agreed, although Bryan did demonstrate a nasty concavity situation
which A* took a *lot* longer to solve than Dijkstra. Six of one, half dozen
of the other....

### ss...@intranet.ca

Apr 14, 1996, 3:00:00 AM
to
To: Phil...@tx.engr.orst.edu

Subject: Re: Avoiding Obstacles to

[Beacon method]
Ph> The main problem with this is that you have to setup beacons for all
Ph> maps by hand... Although most games have premade terrain anyway. You

With large, complex maps that could be a lot of fun. Oh yeah. :)

Ph> blocking terrain. Ideally the same terrain could be attractive or
Ph> repulsive in different strengths for different types of units (e.g.
Ph> ground as opposed to naval).

The more complex it gets, the more evil it becomes. :)

Ph> I'm wondering how effective writing code to try to figure out how to
Ph> place attractors and repulsors would be... Any ideas?

Just one... you could let this run overnight. Have the unit start at a
random location and go to a random destination, and trace its path in a data
element (well, it'd be best to use an array the same size as the map and add
one to wherever the unit goes). Then, after it gets there or some amount of
time elapses, locate the areas with the largest concentration of movement
and put a beacon there whose strength is proportional to that concentration.

As for determining where the high concentrations are (keep in mind this is over an
area), I'm sure there is a simple way, since colour quantization techniques do
similar things in 3D.
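A sketch of that overnight run — `find_path` here is only a stand-in for whatever pathfinder the game already has, and the beacon-picking is the crudest possible version (top-N cells, no area clustering):

```python
import random

def traffic_heatmap(rows, cols, find_path, trials=500, seed=0):
    """The 'add one to the map' idea: run random start->goal trips and
    count how often each cell is crossed. find_path(start, goal) is
    assumed to return a list of (row, col) cells, or None."""
    rng = random.Random(seed)
    counts = [[0] * cols for _ in range(rows)]
    for _ in range(trials):
        start = (rng.randrange(rows), rng.randrange(cols))
        goal = (rng.randrange(rows), rng.randrange(cols))
        for r, c in find_path(start, goal) or []:
            counts[r][c] += 1
    return counts

def place_beacons(counts, top=5):
    """Pick the most-travelled cells as beacon sites, with strength
    proportional to traffic. A real version would cluster over areas."""
    cells = [(n, (r, c)) for r, row in enumerate(counts)
             for c, n in enumerate(row) if n > 0]
    cells.sort(reverse=True)
    return [(pos, n) for n, pos in cells[:top]]
```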

P.S. Yeah, I know I've been pushing this add-one-to-the-map idea over the edge
lately. :)

### ss...@intranet.ca

Apr 14, 1996, 3:00:00 AM
to
To: Hu...@mnsinc.com

Subject: Re: Avoiding Obstacles to

Hey, Steven! How ya been? Or do you prefer to go by your given name now??

Hu> : Question: How does it tell when it is stuck?
Hu> Alternatively, why not just define your terrain as not having
Hu> obstacles your units can *never* cross? Replace those with "expensive"
Hu> terrain that take long to cross, but not impossible, and your
Hu> convex/concave problem goes away as well.

True, but that might look ridiculous depending on the game. Suppose a game is
in a city ... how do you cross a solid wall? :)

But, that *would* work brilliantly in games that are set in the great
outdoors.

Hu> How's WASTED coming along, Sunir? :) Oh, I'm sorry, was it WASTE

WASTE is having troubles avoiding obstacles as well.

It's moving along, but all of a sudden, it slowed down a lot. Fortunately, a
lot was done before we hit the wall (so to speak <G>).

### ss...@intranet.ca

Apr 14, 1996, 3:00:00 AM
to
To: Dmi...@mi.net

Subject: Re: Avoiding Obstacles to

:> Ph> Just modify your maps so that all blocking terrain is convex. ;-)

:>Seriously, that is a viable solution.

Dm> Actually, in this case Sunir, it is not.

Not in this *particular* case, no. In general, it may be. Sue me for looking
beyond this instance. L:) (Sunir with an axe in his head)

:>Question: How does it tell when it is stuck?
Dm> That was the point of my original post! :)

I know.

:>It'd get stuck because of those too if the map is modifiable.

Dm> It is, and thus, I don't have any real control over the obstacles.

The beacon method is also out.

### ss...@intranet.ca

Apr 14, 1996, 3:00:00 AM
to

Subject: Re: Avoiding Obstacles to

> Question: How does it tell when it is stuck?
Br> The easy way out would be to use time. The object could be considered

It wouldn't be accurate. Suppose a complex, maze-like map where the unit has
to go back and forth. The time would have to be equivalent to the area *
velocity (kind of.. velocity is 1D, area is 2D.. but just screw that and use
the numbers).

Br> time, or if it stays in the same area for too long.

That's what I mean.. how do you figure that out cheaply?
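One cheap possibility — only a sketch, with the window size and span threshold pulled out of the air to be tuned per game: remember the last few positions in a ring buffer and call the unit stuck when their bounding box stays tiny.

```python
from collections import deque

class StuckDetector:
    """Cheap 'same area too long' test: keep the last `window`
    positions and flag stuck when their bounding box spans less than
    `min_span` cells. O(window) per check, constant memory."""
    def __init__(self, window=30, min_span=3):
        self.recent = deque(maxlen=window)
        self.min_span = min_span

    def update(self, pos):
        self.recent.append(pos)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough history yet
        xs = [p[0] for p in self.recent]
        ys = [p[1] for p in self.recent]
        return (max(xs) - min(xs)) + (max(ys) - min(ys)) < self.min_span
```

A unit pacing a dead end trips the detector; a unit making steady progress, even back-and-forth through a maze corridor, keeps its bounding box growing and does not.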

### ss...@intranet.ca

Apr 14, 1996, 3:00:00 AM
to
Subject: Beacons (was: Avoiding Obstacles to reach a destination)

Br> NB: I've quoted the word "beacons" because it usually has a slightly
Br> different meaning in robotics.

Are you referring to the method where each robot is a beacon for the others?
Then you go out until you get to the end of beacon range at which point the
next robot comes down the line.

Anyway, that reminded me of the first time I heard about this:

You send out units until they are no longer within sight of any other unit.
While a unit can still see another, keep it moving. Then send out more units.

In this way, the units should spread out over the map and therefore be able to
cover the most area for recon. It also makes them go out in an expanding
area. It also creates a border/front which you can use to detect incoming
enemies or push them back.
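A toy version of the spreading pass — treating units as points on a plane with no terrain, purely a sketch of the idea: any unit that can still see a neighbour moves one step directly away from its nearest one, and you repeat until nobody moves.

```python
import math

def spread_for_recon(units, sight, step=1.0):
    """One relaxation pass: each unit that can see another unit
    (distance < sight) steps directly away from its nearest neighbour.
    Returns (new_positions, moved_flag); iterate until moved is False."""
    new = list(units)
    moved = False
    for i, (x, y) in enumerate(units):
        nearest, nd = None, None
        for j, (ox, oy) in enumerate(units):
            if i == j:
                continue
            d = math.hypot(x - ox, y - oy)
            if nd is None or d < nd:
                nearest, nd = (ox, oy), d
        if nearest is not None and nd < sight:
            dx, dy = x - nearest[0], y - nearest[1]
            norm = math.hypot(dx, dy) or 1.0  # avoid /0 for stacked units
            new[i] = (x + step * dx / norm, y + step * dy / norm)
            moved = True
    return new, moved
```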

Just a thought.

### Andrew Searls

Apr 14, 1996, 3:00:00 AM
to
Dean Mills wrote:
>
> :>: The disadvantage of this method is that it easily gets trapped in
> :>: local minima.
>
> I found, in my implementation using the above technique, it was very easy to
> trap an AI player, such as...
>
> T
>
> XXXXX
> X X
> X X
> X
>
> A
> [snip]

Someone earlier (perhaps in a different thread) mentioned a wave propagation
approach. You could start at A, and pick all squares within, say, 45 degrees
of line AT. Then take each of those squares and do the same. Some easy
optimisations of this would be: keeping track of squares visited to
avoid duplication; carefully adjusting the angle through which you sweep
your search; picking lower-cost squares over higher-cost squares.

Keep in mind also that if your terrain has varying costs, the shortest
route this way may not necessarily be the cheapest one. You might want to
pick all current points within a certain radius once one point reaches the
target, or something along those lines (pick the five closest points and
try a line-of-sight shot, for example).
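One simple reading of the sweep, sketched in Python (my interpretation only): a breadth-first wave that expands just the squares lying inside a fixed cone around the start-to-target line. Here the cone is measured from the start, though per the description you could re-aim it at each expanded square. Note that with 4-connected moves a half-angle under 45 degrees can strand the search at the start square, so the angle adjustment really does need care.

```python
import math
from collections import deque

def sweep_search(grid, start, target, half_angle=45.0):
    """BFS restricted to a cone of half_angle degrees around the
    start->target line. None cells are blocked. Returns the set of
    reached cells; a parent map would recover the actual path."""
    base = math.atan2(target[1] - start[1], target[0] - start[0])

    def in_cone(cell):
        if cell == start:
            return True
        a = math.atan2(cell[1] - start[1], cell[0] - start[0])
        d = abs((a - base + math.pi) % (2 * math.pi) - math.pi)
        return math.degrees(d) <= half_angle

    rows, cols = len(grid), len(grid[0])
    seen = {start}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == target:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] is not None
                    and nxt not in seen and in_cone(nxt)):
                seen.add(nxt)
                q.append(nxt)
    return seen
```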
--
-Andy Searls

Late Enhancements: unrecognized requirements due to lack of proper analysis.

/********************************************
Are you surfing, or just treading bits?
********************************************/

http://gaia.ecs.csus.edu/~searlsa

"Proof of Murphy's Law:
Murphy's Law cannot be proven, yet is correct, as when you try to
prove Murphy's Law, you will see that the proof is incorrect.
This is obviously due to Murphy's Law, therefore
one can only conclude that Murphy's Law is correct."

"If you're looking for a 'Hello, world' OS,
DOS will fit the bill perfectly"

### Eric Dybsand

Apr 15, 1996, 3:00:00 AM
to
In <4krbs4$d...@tribune.concentric.net> Swoo...@cris.com (Steven
Woodcock) writes:
>
>
>Bryan did a great job of covering all the major pathing algorithms
>and approaches, and had a little tool that demonstrated each. You
>could watch how (normal) Dijkstra's "spirals" out looking for a
>solution vs. A*'s "searchlight" method. It was very cool and easily
>one of the best presentations at the CGDC.
>

Yes, it was perhaps the best pathing presentation that has been
done at CGDC.

Did you get Bryan to upload his little pathing demo to your web page?

>: An interesting point that I just realized: A* is faster than
>: Dijkstra, but not always as good...it uses heuristics, and tries
>: to go towards the target and around obstacles. This can lead to it

>:
>: [snip]

>
>Agreed, although Bryan did demonstrate a nasty concavity situation
>which A* took a *lot* longer to solve the Dijkstra. Six of one,
>half dozen of the other....
>

Yes, that surprised me too. I've not seen a modified A* do that.

Regards,

Eric Dybsand
Glacier Edge Technology

### Eric Dybsand

Apr 15, 1996, 3:00:00 AM
to
In <317052...@nbnet.nb.ca> Darren Reid <shok...@nbnet.nb.ca>
writes:
>
>
>An interesting point that I just realized: A* is faster than
>Dijkstra, but not always as good...it uses heuristics, and tries
>to go towards the target and around obstacles. This can lead to
>it finding a more direct but possibly more expensive (in movement
>points) path than Dijkstra...and this side-effect might be actually
>useful to someone who is trying to simulate "realistic" unit
>movement over "perfect" unit movement.
>
>-Darren Reid
>
>

This has been my experience too, Darren: with a modified A*,
the unit often found the kind of path I might have taken as a
driver (especially with an off-road type vehicle), but not necessarily
the most efficient path.

I also found that pathfinding in real time created some
interesting artifacts in the data as a longer path was calculated.

### Peter Schaefer

Apr 15, 1996, 3:00:00 AM
to
I think having only convex obstacles is a good starting idea;
put convex 'shadow obstacles' over those that aren't convex.
This means you will have to tag every map tile with an obstacle-area
identifier.
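The shadow obstacle itself is just the convex hull of the obstacle's tiles, and the textbook monotone-chain algorithm computes it — a generic sketch, nothing game-specific:

```python
def convex_shadow(cells):
    """Monotone-chain convex hull of an obstacle's tile coordinates:
    the convex 'shadow obstacle' over a possibly-concave obstacle.
    Returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(cells))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a-o) x (b-o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

A pathfinder can then treat every tile inside the hull as blocked, which removes the concave pocket a wall-follower gets trapped in.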
<sigh>
All those ingenious algorithms waiting to be written by you & me!
--
Peter Schaefer

Chant this Mantra 1024 times a day
to become a happy usenet user:

Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee
Deedledee Deedledee Deedledee

### Szu-Wen Huang

Apr 15, 1996, 3:00:00 AM
to
ss...@intranet.ca wrote:

: Hey, Steven! How ya been? Or do you prefer to go by your given name now??

Quite fine... 'Steven' will be just fine ;)

: True, but that might look ridiculous depending on the game. Suppose a game is

: in a city ... how do you cross a solid wall? :)

A tank or APC can ram the wall, while infantry can climb it ;). In any
case, a simplistic "go left" might work for non-continuous obstacles.

: But, that *would* work brilliantly in games that are set in the great
: outdoors.

Except when (as somebody else pointed out) the scale is large and you have
marine and land units. Something on the scale of Empire would obviously
not work, because ships would climb onto land ;).

: Hu> How's WASTED coming along, Sunir? :) Oh, I'm sorry, was it WASTE

: WASTE is having troubles avoiding obstacles as well.

: It's moving along, but all of a sudden, it slowed down a lot. Fortunately, a
: lot was done before we hit the wall (so to speak <G>).

Equip the unit with a terrain-blaster :).

### Bjorn Reese

Apr 15, 1996, 3:00:00 AM
to
ss...@intranet.ca wrote:
>
> > Question: How does it tell when it is stuck?
> Br> The easy way out would be to use time. The object could be considered
>
> It wouldn't be accurate. Suppose a complex, maze-like map where the unit has

Which is why I said "the easy way out" ;) I didn't intend to make
some general solution applicable to every possible scenario.

> Br> time, or if it stays in the same area for too long.
>
> That's what I mean.. how do you figure that out cheaply?

I've stumbled upon a similar technique, where they use a local
spatial memory, which may be more suitable. Try

### Darren Reid

Apr 15, 1996, 3:00:00 AM
to
ss...@intranet.ca wrote:

> Ph> I'm wondering how effective writing code to try to figure out how to
> Ph> place attractors and repulsors would be... Any ideas?
>
> Just one... you could leave this run over night. Have the unit start at a
> random location and go to a random destination and trace its path in a data
> element (well, it'd be best to use an array the same size as the map and add
> one to wherever the unit goes). Then, after it gets there or some sort of
> time elapses, you located the areas with the largest concentration of movement
> and put a beacon there that is proportional to the concentration of movement.
>
> As for determining where high concentrations are (keep in this is over an
> area), I'm sure there is a simple way as colour quantinization techniques do
> similar things in 3D.
>
> P.S. Yeah, I know I've been pushing this add-one-to-the-map idea over the edge
> lately. :)

Hmm...might not have to run overnight. Imagine this:
The user makes a map (it may even be randomly generated). When it is selected to be
played, your program initializes an equal-size array to 0 and then charts shortest paths
from each edge to the opposing edge (on a 64x64 map, 128 paths are calculated). It adds 1
to each cell crossed each time. This could be calculated quickly on loading the map, and
would give you some good info on hotspots (passes, roads, etc.) and ambush points (a
hotspot with dead zones next to it would be an obvious place to look).
Hey! Sunir! This is sounding interesting. Anyone have any ideas on evaluating the data
generated by this system?
This could be performed recursively as well....calculate a second pass from the edges to
the biggest hotspots, to show where the enemy is liable to be while on his way to the
hotspots, and where to suspect ambushes, and where to lay them...of course, depending on
your game system/map design, edges might not be the best starting points for this second
pass. Other hotspots might be, or cities, or whatever.
This map could be used as a bias adjustment for burnt circles, too.
There must be a big flaw in this somewhere...it sounds too easy <g>.
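A sketch of the loading-time pass — `find_path` stands in for whatever shortest-path routine the game already uses, and the pairing of edge cells is the simplest one (straight across); on a 64x64 map the two loops below generate the 128 paths:

```python
def edge_hotspots(rows, cols, find_path):
    """Chart edge-to-opposing-edge shortest paths and count cell
    crossings. find_path(start, goal) is assumed to return a list of
    (row, col) cells, or None when no path exists. High counts mark
    hotspots (passes, roads); zero cells beside them mark dead zones."""
    counts = [[0] * cols for _ in range(rows)]
    pairs = []
    for c in range(cols):                       # top edge -> bottom edge
        pairs.append(((0, c), (rows - 1, c)))
    for r in range(rows):                       # left edge -> right edge
        pairs.append(((r, 0), (r, cols - 1)))
    for start, goal in pairs:
        for r, c in find_path(start, goal) or []:
            counts[r][c] += 1
    return counts
```

The recursive second pass would just call this again with the biggest hotspots (or cities, etc.) substituted for the edge cells.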

-Darren Reid

### Darren Reid

Apr 15, 1996, 3:00:00 AM
to
Steven Woodcock wrote:

> That's Bryan Stout's presentation; he lurks around here too sometimes.
>

> Bryan did a great job of covering all the major pathing algorithms and
> approaches, and had a little tool that demonstrated each. You could watch
> how (normal) Dijkstra's "spirals" out looking for a solution vs. A*'s
> "searchlight" method. It was very cool and easily one of the best
> presentations at the CGDC.

Cool! I didn't go to that seminar, as I had other things to do...Bryan was very visible
in a few of the roundtables that I attended, and introduced himself to me eventually
(guess I was visible too <g>). I would love to look at that tool...I'll have to email
him and see if he'll share it.

### mari...@nwoasis.wa.com

Apr 15, 1996, 3:00:00 AM
to
>Jasper Phillips wrote:
>
>> The main problem with this is that you have to setup beacons for all
>>maps by hand... Although most games have premade terrain anyway.

I've been catching part of this thread for the last few days, and it
seems similar to the project I am working on and currently trying to
figure out.

For my application, I am trying to plot a course on a globe, where I
know the position and destination points, but can only travel in water
and must navigate around all land masses. The globe is configured in a
geodesic pattern so that any point could be land or water (but that all
water would be somehow connected for travelling purposes). If I could
make a straight course, I know an easy way to figure the straightest
course, but if the land masses get very convoluted between my position
and destination, the time necessary to find a good course could go up
dramatically (especially if an exhaustive search was necessary).

I saw mention of the beacons to place, and thought of determining
something like major navigation lanes (where many courses tend to
follow)... and if I have a hard time finding a course to a destination,
I can try finding a course to part of the navigation lane, and finding
my destination from there. Does this sound reasonable? I don't see it
generally finding the best route to another place, but I guess I am
thinking of it as getting out of the convolution into a place where I
know I can plot an easier course...
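Concretely, the two-stage search I'm imagining might look like this — a sketch only, assuming symmetric travel costs and some precomputed set of lane waypoints, with `adj` an adjacency list over water cells (or a coarser graph built from them):

```python
import heapq

def dijkstra(adj, start):
    """Plain Dijkstra over an adjacency dict {node: [(nbr, cost), ...]}.
    Returns the cheapest known cost to every reachable node."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def route_via_lanes(adj, start, goal, lane_nodes):
    """Two-stage routing: reach the cheapest 'navigation lane' waypoint
    from the start, then the goal from that waypoint (symmetric costs
    assumed). Not optimal in general -- it trades path quality for a
    search over a small lane graph. Returns (cost, waypoint) or None."""
    from_start = dijkstra(adj, start)
    from_goal = dijkstra(adj, goal)
    best = None
    for lane in lane_nodes:
        if lane in from_start and lane in from_goal:
            total = from_start[lane] + from_goal[lane]
            if best is None or total < best[0]:
                best = (total, lane)
    return best
```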

I haven't thought of any other way as yet to get out of whatever place
without potentially needing an exhaustive search to find the only outlet
from a large "ocean" which spans numerous points of the globe.

Any ideas? Thanks...

Marianna

Mail sent with Portico for Excalibur