One of the problems within it is to navigate a robot such that it
picks up objects and arranges them in a given pattern, avoiding
obstacles in its way. The destination point is known in advance.
This problem is not a gaming problem for me, but it certainly
seems to belong in this group, since I can easily see it applying
in a game situation.
I have a lot of time before this is due, and I want to get it working
using a good method.
Meanwhile, we have had two methods hinted at (to avoid obstacles):
a) Brute force: If you hit a wall, turn a little, and keep
going. This will eventually work, but is less than
inspiring.
b) Find a good path, THEN follow it.
I don't want to use (a) unless I really have to, since it's very
el-cheapo.
I would much rather find a good way to do (b), but I'm not sure how
to go about it.
I have briefly considered 2 methods:
i) Ray casting - backwards. I'm not sure exactly how I
would do this, but I was thinking to start from the
destination, and try casting towards the robot. Upon
failure, try to offset the angle of casting a little,
until you've gone full circle (increment both +ive
and -ive with each cycle). This gives me the first
ray, but I then have to cast more rays (recursively)
until the robot is hit. But once the initial ray finds
a "way out" around the obstacles, where does the next
ray begin, etc?
ii)Graph algorithms. I was thinking to define a graph
of randomly (??) distributed vertices around the
destination, and where there is no obstacle from one
vertex to another, an edge exists. This would lend
itself to a shortest-route algo. I'm VERY vague about
this method, since it doesn't seem very concrete.
Anyway, my request is for hints and pointers and algorithm
names.
I'm **NOT** asking for a solution, I would just like some direction,
and perhaps a reference. Are my ideas worthwhile? Are they junk? etc...
I would really appreciate any response (especially via e-mail).
TIA.
###################################################################
Avi Pilosof - av...@cse.unsw.edu.au
- http://www.cse.unsw.edu.au/~avip/
> You cannot propel yourself forward by patting yourself on the back.
>Meanwhile, we have had two methods hinted at (to avoid obstacles):
>
> a) Brute force:
> b) Find a good path, THEN follow it.
>
>I don't want to use (a) unless I really have to, since it's very
>el-cheapo.
agreed.
>
>I would much rather find a good way to do (b), but I'm not sure how
>to go about it.
>
>I have briefly considered 2 methods:
>
> ii)Graph algorithms. I was thinking to define a graph
> of randomly (??) distributed vertices around the
> destination, and where there is no obstacle from one
> vertex to another, an edge exists. This would lend
> itself to a shortest-route algo. I'm VERY vague about
> this method, since it doesn't seem very concrete.
Then try a grid of squares (coordinate space, 2D I presume?). Easy to code,
and it handles an unbounded space better.
>
>Anyway, my request is for hints and pointers and algorithm
>names.
Build a grid or a graph, depending on the coordinate space appropriate to your
objects, obstacles and robot. Add obstacles to the graph/grid as they are
encountered. Use a breadth-first search from the robot until the destination
is encountered. Rerun it after each obstacle is added.
If you want to optimize, save the graph/grid info between obstacle encounters,
instead of regenerating it. But on small grids, the search is very fast.
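The grid-plus-breadth-first-search scheme above can be sketched in a few lines. A minimal Python sketch; the grid encoding (0 = free, 1 = obstacle) and the function name are mine, invented for illustration:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a 2D grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk the parent links back
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None  # destination unreachable with the obstacles known so far

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))   # detours around the wall in row 1
```

Per the advice above, the simplest scheme is just to rerun this from scratch whenever a new obstacle is added to the grid; on small grids that is plenty fast.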
> One of the problems within it is to navigate a robot such that it
> picks up objects and arranges them in a given pattern, avoiding
> obstacles in its way. The destination point is known in advance.
You could use Potential Field as suggested by Jean-Claude Latombe
("Robot Motion Planning", ISBN 0-7923-9129-2)
Make your goal an attractive force, and obstacles to be avoided
repulsive forces. Imagine a marble rolling down towards the goal.
All obstacles are hills which the marble will roll around. This
approach has been applied to commercial robot controllers. The
disadvantage of this method is that it easily gets trapped in
local minima.
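For the curious, one step of such a potential field might look roughly like this. A sketch only: the gain constants, influence radius and step size are made-up tuning values, and the function name is mine, not Latombe's:

```python
import math

def field_step(pos, goal, obstacles, k_att=1.0, k_rep=4.0, influence=3.0):
    """One potential-field step: sum an attractive pull toward the goal
    and a repulsive push from each obstacle inside its influence radius."""
    px, py = pos
    # Attractive force: proportional to the offset toward the goal.
    fx = k_att * (goal[0] - px)
    fy = k_att * (goal[1] - py)
    for ox, oy in obstacles:
        d = math.hypot(px - ox, py - oy)
        if 0 < d < influence:
            # Repulsion grows sharply as the obstacle gets closer.
            push = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += push * (px - ox) / d
            fy += push * (py - oy) / d
    mag = math.hypot(fx, fy)
    if mag < 1e-9:
        return pos  # forces balance out: the dreaded local minimum
    step = 0.1
    # The "marble" rolls a small fixed distance along the net force.
    return (px + step * fx / mag, py + step * fy / mag)

pos = (0.0, 0.0)
for _ in range(300):
    pos = field_step(pos, goal=(10.0, 0.0), obstacles=[(5.0, 1.5)])
    if math.hypot(pos[0] - 10.0, pos[1]) < 0.2:
        break  # close enough to the goal
```

With the obstacle offset from the straight line, the marble bows around it and reaches the goal; put a concave obstacle directly in the way and the `mag < 1e-9` branch is exactly the local-minimum trap mentioned above.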
> This problem is not a gaming problem for me, but it certainly
> seems to belong in this group, since I can easily see it applying
> in a game situation.
Indeed. I'm experimenting with the above mentioned method, and
it seems to do very well for arcade type of games - very smooth
motions.
--
Bjorn Reese Email: bre...@imada.ou.dk
Odense University, Denmark URL: http://www.imada.ou.dk/~breese
"It's getting late in the game to show any pride or shame" - Marillion
Somebody please help out with the faq!!!! I can't. I'm writing a
game and building up a nice collection of C++ code to put in the faq.
Tim
>I'm studying AI, and we just got our final assignment.
>One of the problems within it is to navigate a robot such that it
>picks up objects and arranges them in a given pattern, avoiding
>obstacles in its way. The destination point is known in advance.
>This problem is not a gaming problem for me, but it certainly
>seems to belong in this group, since I can easily see it applying
>in a game situation.
Try some shortest path algorithms. Take a look at my web page for some
solutions, including a description of the A* algorithm.
http://www.lis.pitt.edu/~john/shorpath.htm
Regards,
John
------------------------------------------------------------------------------
John Christian Lonningdal - Web: http://www.lis.pitt.edu/~john
jo...@lis.pitt.edu - (412) 521-9386 - 6611 Wilkins Ave, Pittsburgh, PA 15217
------------------------------------------------------------------------------
Get JOBE 1.5 a sprite editor @ http://www.lis.pitt.edu/~john/jobe/jobe.htm
Try out FUSECUTTER! @ http://www.lis.pitt.edu/~john/fusecut/fusecut.htm
: You could use Potential Field as suggested by Jean-Claude Latombe
: ("Robot Motion Planning", ISBN 0-7923-9129-2)
: Make your goal an attractive force, and obstacles to be avoided
: repulsive forces. Imagine a marble rolling down towards the goal.
: All obstacles are hills which the marble will roll around. This
: approach has been applied to commercial robot controllers. The
: disadvantage of this method is that it easily gets trapped in
: local minima.
This sounds like what one uses in flocking algorithms: At each "frame",
the destination contributes a vector, and the obstacle(s) contributes a
vector.
However, as you say, I'm not guaranteed to reach my destination. If
there's an obstacle lying between me and my target, such that it is
perpendicular to the line created between me and my target, I will
eventually come to a stop as the forces balance...
Is this correct?
Ti> Somebody please help out with the faq!!!! I can't. I'm writing a
I'll read the FAQ on FAQs and take it over seeing as I'm the only one (aside
from you) who cares. Now to tell Steve Furlong that he no longer has to care.
:-)
--
Sunir Shah (ss...@intranet.ca) http://intranet.ca/~sshah/
BBS: The Open Fire BBS +1(613)584-1606 Fidonet: 1:241/11
The Programmers' Booklist : http://intranet.ca/~sshah/booklist.html
Synapsis Entertainment : http://intranet.ca/~sshah/synapsis.html
WASTE (Wargame AI Contest): http://intranet.ca/~sshah/waste/waste.html
: Make your goal an attractive force, and obstacles to be avoided
: repulsive forces. Imagine a marble rolling down towards the goal.
: All obstacles are hills which the marble will roll around. This
: approach has been applied to commercial robot controllers. The
: disadvantage of this method is that it easily gets trapped in
: local minima.
Aces of the Deep used this very same method for handling their
destroyer/convoy/submarine AI. It works like a charm.
Steve
+=============================================================================+
| _ |
| Steven Woodcock _____C .._. |
| Senior Software Engineer, Gameware ____/ \___/ |
| Lockheed Martin Information Systems Group <____/\_---\_\ "Ferretman" |
| Phone: 719-597-5413 |
| E-mail: wood...@escmail.orl.mmc.com (Work), swoo...@cris.com (Home) |
| Web: http://www.cris.com/~swoodcoc/wyrdhaven.html (Top level page) |
| http://www.cris.com/~swoodcoc/ai.html (Game AI page) |
| Disclaimer: My opinions in NO way reflect the opinions of |
| the Lockheed Martin Information Systems Group |
+=============================================================================+
I found, in my implementation using the above technique, it was very easy to
trap an AI player, such as...
      T

    XXXXX
    X   X
    X   X
    X

      A
Using a slightly modified A* search, the AI player would end up in the U,
trapped for all intents and purposes. Anyone have any ideas on this? I
really don't want to redo this approach, as the rest of it works well! :)
D.Mills
______________________________________________________________
Dean Mills - Savage SoftWare Company
Team OS/2 Member
E-Mail - dmi...@mi.net - data...@ftcnet.com
Savage SoftWare, a division of Savage Online Services Inc.
______________________________________________________________
Just modify your maps so that all blocking terrain is convex. ;-)
Seriously though, I'm guessing that this situation, while common and
stifling, isn't the usual one, and that this kind of "springy" AI
normally works fairly well.
If this is the case, couldn't you just put in a special case if the AI
gets stuck? So if your AI can't go where it wants, you can switch to some
other AI for a while. The problem naturally is when to switch back to
the springy AI, but I'd guess you could start with something simple like
returning as soon as you had direct line of sight.
Another option might be to combine this with some sort of pre computed
"best path routes" between general areas, and let that kick in if you really
got stuck.
Just some ideas,
-Jasper
--
/\ Jasper Phillips (Pit Fiend) ______,....----,
/VVVVVVVVVVVVVV|==================="""""""""""" ___,..-'
`^^^^^^^^^^^^^^|======================----------""""""
\/ http://www.cs.orst.edu/~philljas/
If you have a way of determining when the AI player is trapped (can't
find novel movements, consumes too much time, ...), then you could switch
the AI governing the computer player. For example, in the situation
presented above, the AI/strategy of always turning left or always turning
right would easily solve the problem.
Now that I think about it, it seems as though I am suggesting that you
implement a superAI, in the sense that this AI would select the most
appropriate AI to use when faced with a certain environment. In the
current scenario, the superAI would utilise your AI until the computer
player is trapped. When this situation arises, the superAI would now
utilise the TurnLeft/TurnRight AI until either the destination is reached
or the computer player is once again moving in the direction of the goal.
--
Stephane [TEAM OS/2]
Simple. Remove the 'U' from the map.
Instead of exerting more effort in making the movement algorithm smarter
you just make the map simpler.
Steve
In <4k9k6v$a...@scratchy.mi.net> dmi...@mi.net (Dean Mills) writes:
>:>: The disadvantage of this method is that it easily gets trapped in
>:>: local minima.
>I found, in my implementation using the above technique, it was very easy to
>trap an AI player, such as...
>       T
>
>     XXXXX
>     X   X
>     X   X
>     X
>
>       A
>Using a slightly modified A* search, the AI player would end up in the U,
>trapped for all intents and purposes. Anyone have any ideas on this, as I
>really don't want to redo this approach, as the rest of it works well! :)
How about using a hierarchical method for determining the path?
Organize the terrain into something similar to a quad tree. Take your pick
how you construct it. The weighting of each "quad" is the average of
all those within. (Storing min and max might be useful as well...)
Now this is how the algorithm would work. At the highest level of granularity
pick the path to your destination which requires the least amount of "effort".
Now enter the first "quad" that is on the path. Bump down to the next level
of granularity, recursively applying the same algorithm. The trick here is,
if you encounter a position where you can't go anywhere, bump back up to the
previous level of granularity and select a path which requires a little more
effort.
Such an algorithm, or one similar to it, would at the very least allow you to
"back out" of corners like the one you described above. Add a little bit of
lookahead before moving and you're set.
Tim Thomas
tho...@urbana.mcd.mot.com
You basically have two options. Either avoid the local minima, or escape
once trapped.
The former is the preferred option, but it can be somewhat tricky.
One possibility is to use "beacons" -- well-placed attractive
points. Let the object travel from one beacon to the next until its
destination is within sight.
If you choose the latter option, you could try something like this:
1) Detect a dead-end (if the object doesn't reach its destination
within a specified time, or if it stays too long in a small
area)
2) Pick a random location which is reachable from the current
location, and go there first.
3) When you reach the random location, go to the desired location.
There are plenty of other methods for both options though.
You cannot do this if writing an AI for a game like Bolo. Players
just plug in AI brains and use their own maps. You have no control
(nor should you, really).
That said, at one time I gave a lot of thought to various movement
algorithms. I came up with one whose one requirement is that LOS must
be cheap to calculate. I'll post it if anyone's interested.
--
Robert Uhl | Save the Ales! Homebrewer Since 1995
Orthodox Christian | If you like the Macintosh, send mail to
Macintosh Programmer | evang...@macway.com for information about the
Bolo Player (Spectre) | EvangeList, Guy Kawasaki's list for Mac advocates.
> Just modify your maps so that all blocking terrain is convex. ;-)
Chris Crawford once spent a long time on just this exact problem...and finally realized
that the best solution was removing concave areas from the map. A* should fix this
problem, so that slight modification must be breaking it.
> Seriously though, I'm guessing that this situation while comman and
> stiffling, isn't the usual one, and that this kind of "springy" AI
> normally works fairly well.
>
> If this is the case, couldn't you just put in a special case if the AI
> get stuck? So if your AI can't go where it wants, you can switch to some
> other AI for a while. The problem naturally is when to switch back to
> the springy AI, but I'd guess you could start with something simple like
> returning as soon as you had direct line of sight.
Yup. If your attractor/repulsor AI is breaking A*, just detect for stuck or "dancing"
units and drop back to a real A* or Dijkstra's for a few turns.
>
> Another option might be to combine this with some sort of pre computed
> "best path routes" between general areas, and let that kick in if you really
> got stuck.
This adds an extra benefit to the AI...pre-planned tactics ala pincer moves etc.
I would use attractor/repulsor to determine targets, then use Dijkstra's to compute a
true shortest path that avoids heavy repulsor (defended) areas and follow that path. If
the target is moving, you can re-evaluate or choose to follow the attack path for a few
turns anyway...
Better yet, just remove all terrain...
This should do wonders in simplifying your AI. ;-)
Well said Robert!
: That said, at one time I gave a lot of thought to various movement
: algorithms. I came up with one whose one requirement is that LOS must
: be cheap to calculate. I'll post it if anyone's interested.
Hey, I'd like to see it. I'm always looking for fast code that I
won't have to write myself...... ;)
Steven Woodcock
"The One and Only"
The main problem with this is that you have to set up beacons for all maps
by hand... Although most games have premade terrain anyway. You could also
just make concave areas more powerful repulsors than other blocking terrain.
Ideally the same terrain could be attractive or repulsive in different
strengths for different types of units (e.g. ground as opposed to naval).
I'm wondering how effective writing code to try to figure out how to place
attractors and repulsors would be... Any ideas?
[second option snipped]
-Jasper
Ph> Just modify your maps so that all blocking terrain is convex. ;-)
Seriously, that is a viable solution.
Ph> If this is the case, couldn't you just put in a special case if the AI
Ph> get stuck? So if your AI can't go where it wants, you can switch to
Question: How does it tell when it is stuck?
Ph> Another option might be to combine this with some sort of pre computed
Ph> "best path routes" between general areas, and let that kick in if you
Ph> really got stuck.
It'd get stuck because of those too if the map is modifiable.
--
*NEWSFLASH*
-GAMEDEV, The Game Development Echo, is now on the Zone 1 Backbone (Fido)
-Stay tuned for the release of the *new* comp.ai.games FAQ . . .
: Ph> Just modify your maps so that all blocking terrain is convex. ;-)
: Seriously, that is a viable solution.
: Ph> If this is the case, couldn't you just put in a special case if the AI
: Ph> get stuck? So if your AI can't go where it wants, you can switch to
: Question: How does it tell when it is stuck?
Alternatively, why not just define your terrain as not having obstacles
your units can *never* cross? Replace those with "expensive" terrain
that takes longer to cross, but not impossible, and your convex/concave
problem goes away as well.
: Ph> Another option might be to combine this with some sort of pre computed
: Ph> "best path routes" between general areas, and let that kick in if you
: Ph> really got stuck.
: It'd get stuck because of those too if the map is modifiable.
How's WASTED coming along, Sunir? :) Oh, I'm sorry, was it WASTE instead?
:)
> :>: The disadvantage of this method is that it easily gets trapped in
> :>: local minima.
>
> I found, in my implementation using the above technique, it was very easy to
> trap an AI player, such as...
>
>       T
>
>     XXXXX
>     X   X
>     X   X
>     X
>
>       A
When the piece gets trapped, take a note of the cells occupied by the
obstacle. Follow the left-hand wall until no part of the obstacle lies
between the piece and the target, then continue. [Simple algorithm.]
If you've got the processing power and storage space free then precompute
the path before you start out on any journey. [Extra marks.]
Point to ponder: what governs whether you should follow the left-hand or
right-hand wall ? [You can take a *good* guess from the 'Simple' alg.]
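A rough sketch of the left-hand rule on a grid, in Python (my own toy encoding; the stop test of "no part of the obstacle between the piece and the target" is left out and a fixed step count used instead):

```python
# Headings index DIRS: 0 = up, 1 = right, 2 = down, 3 = left.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def follow_left_wall(grid, pos, heading, steps):
    """Hug the wall on the piece's left for a fixed number of steps.
    grid: 0 = free, 1 = wall.  Returns the trail of visited cells."""
    trail = [pos]
    for _ in range(steps):
        # Prefer turning left; else straight, then right, then back.
        for turn in (-1, 0, 1, 2):
            h = (heading + turn) % 4
            r, c = pos[0] + DIRS[h][0], pos[1] + DIRS[h][1]
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) \
                    and grid[r][c] == 0:
                pos, heading = (r, c), h
                break
        trail.append(pos)
    return trail

# The U-shaped trap from the diagram above, open toward A at the bottom.
grid = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
trail = follow_left_wall(grid, (3, 2), 0, 10)  # start inside, facing T
```

Started against the left wall inside the U, the piece hugs its way around and walks out the open bottom within a handful of steps.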
Simon.
--
sla...@hearsay.demon.co.uk -- the poster | "Sometimes a .sig is just
formerly known as sla...@entergrp.demon.co.uk. | a .sig." -- Sigmund Freud.
OK, here are two ideas.
As someone already mentioned, a quad-tree.
Take a big square. If it is completely free, mark it as clear. If it
is completely blocked, mark it as blocked. If it is partially clear,
give this square 4 children (in effect dividing it into 2x2) and repeat
this procedure for all of the children.
At the end of this, you'll have a number of squares in this tree-structure,
all the leaves will be either impassable or clear.
Search recursively for a path from free square to free square.
Once you have such a path, proceed from the center of each free square
to the next.
This should be pretty fast. It will work especially well for a large
board that is mostly free space. Since you are stepping from the center
of one square to another, you will have a slightly jagged looking path.
Once you have the quad-tree in place it will work for going from anywhere
to anywhere.
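A bare-bones version of that construction step (a Python sketch; it assumes a square board whose side is a power of two, with 0 = clear and 1 = blocked, and leaves out the search over the resulting leaves):

```python
def build_quadtree(grid, r, c, size):
    """Recursively classify a size-by-size square of the grid.
    Returns 'clear', 'blocked', or a dict of four child quadrants."""
    cells = [grid[r + i][c + j] for i in range(size) for j in range(size)]
    if all(v == 0 for v in cells):
        return "clear"       # completely free: a single leaf
    if all(v == 1 for v in cells):
        return "blocked"     # completely impassable: a single leaf
    half = size // 2
    # Partially clear: split into 2x2 children and recurse.
    return {
        "nw": build_quadtree(grid, r, c, half),
        "ne": build_quadtree(grid, r, c + half, half),
        "sw": build_quadtree(grid, r + half, c, half),
        "se": build_quadtree(grid, r + half, c + half, half),
    }

grid = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
tree = build_quadtree(grid, 0, 0, 4)
```

Here the whole south-west quadrant collapses into one "blocked" leaf and the other three into "clear" leaves, which is exactly what makes the method cheap on mostly open boards.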
Here's another idea, rather like a breadth-first search but using the board
itself to store things. It's a modified flood-fill.
-- WARNING -- THE FOLLOWING MAY BE CONSIDERED A 'SOLUTION'
I call it "Ant Races".
Have a list of ants, starting out at the start. Each ant knows how many
moves it took to get where it is. Each turn, each ant moves into a free
space and 'colors' that space with the current number of ticks elapsed. If
there is more than 1 free space the ant can get to, it spawns a child
(to be added to the list of ants) for each extra free space. Ants that
cannot get to a free space will die and be removed from the list of ants.
Start with one ant at your destination.
Once you arrive at the start, you can trace your path back by
always going down the gradient.
This should be much faster than the typical breadth-first search, because
you don't have to check an open or a closed list -- just look at the
board. The algorithm is O(size of board) -- every square on the board
is looked at some constant factor of times, maybe 2.5 times or so.
It's simple to implement and will give you a very close to optimal
path. The drawback is that you have to repeat all that work for every
different start/destination, so it isn't the best if you have to do a bunch
of paths from/to different places. For that, I would suggest the quad-tree.
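A compact version of the Ant Races fill plus the walk back down the gradient (a sketch; I've folded the per-ant spawning and dying into a plain queue, which produces the same coloring):

```python
from collections import deque

def ant_races(grid, dest):
    """Flood-fill from the destination: each free cell (0) records the
    number of 'ticks' an ant needed to reach it; walls (1) stay None."""
    rows, cols = len(grid), len(grid[0])
    ticks = [[None] * cols for _ in range(rows)]
    ticks[dest[0]][dest[1]] = 0
    ants = deque([dest])
    while ants:
        r, c = ants.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and ticks[nr][nc] is None):
                ticks[nr][nc] = ticks[r][c] + 1   # the ant colors the square
                ants.append((nr, nc))
    return ticks

def trace_back(ticks, start):
    """From the start, always step to a neighbour with a lower tick count
    ('down the gradient') until the destination, marked 0, is reached.
    Assumes the fill actually reached the start."""
    path = [start]
    r, c = start
    while ticks[r][c] != 0:
        r, c = min(((nr, nc)
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < len(ticks) and 0 <= nc < len(ticks[0])
                    and ticks[nr][nc] is not None),
                   key=lambda cell: ticks[cell[0]][cell[1]])
        path.append((r, c))
    return path
```

As described above, the board itself is the open/closed list: the `ticks` array doubles as the visited set and as the gradient to descend.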
-- Richard Wesson
(wes...@cse.ogi.edu)
> The main problem with this is that you have to setup beacons for all maps
> by hand... Although most games have premade terrain anyway. You could also
> just make concave areas more powerfull repulsors than other blocking terrain.
This could be a problem if your target is within the concavity at some
stage.
> I'm wondering how effective writing code to try to figure out how to place
> attractors and repulsors would be... Any ideas?
I think selecting the "beacons" from the meeting points (vertices) of
a Voronoi diagram could be pretty effective.
NB: I've quoted the word "beacons" because it usually has a slightly
different meaning in robotics.
--
Bjorn Reese - Email: bre...@imada.ou.dk
> Question: How does it tell when it is stuck?
The easy way out would be to use time. The object could be considered
stuck either if it hasn't reached its destination within a reasonable
time, or if it stays in the same area for too long.
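One cheap way to code the "stays in the same area for too long" test (a sketch; the window size and radius are arbitrary tuning knobs, and the class name is mine):

```python
from collections import deque

class StuckDetector:
    """Declare the object stuck if, over the last `window` recorded
    positions, it never strayed more than `radius` cells from where
    that window began."""

    def __init__(self, window=20, radius=2):
        self.window = window
        self.radius = radius
        self.history = deque(maxlen=window)

    def update(self, pos):
        """Record a position; return True once the object looks stuck."""
        self.history.append(pos)
        if len(self.history) < self.window:
            return False  # not enough data yet
        ox, oy = self.history[0]
        return all(abs(x - ox) <= self.radius and abs(y - oy) <= self.radius
                   for x, y in self.history)

detector = StuckDetector(window=10, radius=1)
```

A unit oscillating in front of a wall trips the detector; one making steady progress never does. Call `update` once per game tick.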
Actually, in this case Sunir, it is not.
:> Ph> If this is the case, couldn't you just put in a special case if the AI
:> Ph> get stuck? So if your AI can't go where it wants, you can switch to
:>
:>Question: How does it tell when it is stuck?
That was the point of my original post! :)
:>It'd get stuck because of those too if the map is modifiable.
It is, and thus, I don't have any real control over the obstacles.
D.Mills
Well, since the whole idea of the game is to be as realistic as possible, this
is not a viable solution. Ships going overland, tanks traversing mountain
ranges, etc., wouldn't look too hot in the game at all! :)
D.Mills
Ph> Better yet, just remove all terrain...
Ph> This should do wonders in simplifying your AI. ;-)
And then give the computer a *really* big gun and the player a stick.
Guaranteed win every time. :-)
Only works if the non-traversable terrain is only slightly concave. If the concavity
bends beyond a tangent to the line drawn to the target, you will start back towards the
target while still trapped within the cavity. Been there, done that ;)
Seriously, am I missing something here? Do you really want a path algorithm as stupid as
the C&C one? What's wrong with a true A* or Dijkstra's? Check out:
http://www.lis.pitt.edu/~john/shorpath.htm
for lots of good stuff.
There is a nice white-paper on shortest path algorithms in the 1996 CGDA proceedings, on
page 439.
An interesting point that I just realized: A* is faster than Dijkstra, but not always as
good...it uses heuristics, and tries to go towards the target and around obstacles. This
can lead to it finding a more direct but possibly more expensive (in movement points)
path than Dijkstra...and this side-effect might be actually useful to someone who is
trying to simulate "realistic" unit movement over "perfect" unit movement.
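That difference is easy to see by running one uniform-cost grid search twice: with a zero heuristic it is plain Dijkstra, with an admissible Manhattan-distance heuristic it is A*. A toy sketch with uniform move costs (my own encoding), so it only shows the expansion-count difference, not the weighted-terrain effect Darren describes:

```python
import heapq

def search(grid, start, goal, heuristic):
    """Uniform-cost grid search.  Returns (path cost, nodes expanded)
    so Dijkstra (zero heuristic) and A* can be compared directly."""
    frontier = [(heuristic(start, goal), 0, start)]
    best = {start: 0}
    expanded = 0
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cost > best.get(cell, float("inf")):
            continue                        # stale queue entry
        expanded += 1
        if cell == goal:
            return cost, expanded
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                new = cost + 1
                if new < best.get(nxt, float("inf")):
                    best[nxt] = new
                    heapq.heappush(frontier,
                                   (new + heuristic(nxt, goal), new, nxt))
    return None, expanded

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
zero = lambda a, b: 0

open_grid = [[0] * 8 for _ in range(8)]
astar_cost, astar_exp = search(open_grid, (0, 0), (7, 0), manhattan)
dijk_cost, dijk_exp = search(open_grid, (0, 0), (7, 0), zero)
# Both find a cost-7 path; A* expands far fewer nodes getting there.
```

On weighted terrain the same comparison would show A*'s heuristic occasionally trading a few movement points for a more "direct-looking" route.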
-Darren Reid
I'd just like to second Darren about this page; it's a good one.
: There is a nice white-paper on shortest path algorithms in the 1996 CGDA
: proceedings, on page 439.
That's Bryan Stout's presentation; he lurks around here too sometimes.
Bryan did a great job of covering all the major pathing algorithms and
approaches, and had a little tool that demonstrated each. You could watch
how (normal) Dijkstra's "spirals" out looking for a solution vs. A*'s
"searchlight" method. It was very cool and easily one of the best
presentations at the CGDC.
: An interesting point that I just realized: A* is faster than Dijkstra, but
: not always as good...it uses heuristics, and tries to go towards the target
: and around obstacles. This can lead to it finding a more direct but
: possibly more expensive (in movement points) path than Dijkstra...and this
: side-effect might be actually useful to someone who is trying to simulate
: "realistic" unit movement over "perfect" unit movement.
Agreed, although Bryan did demonstrate a nasty concavity situation
which A* took a *lot* longer to solve than Dijkstra. Six of one, half dozen
of the other....
[Beacon method]
Ph> The main problem with this is that you have to setup beacons for all
Ph> maps by hand... Although most games have premade terrain anyway. You
With large, complex maps that could be a lot of fun. Oh yeah. :)
Ph> blocking terrain. Ideally the same terrain could be attractive or
Ph> repulsive in different strengths for different types of units (e.g.
Ph> ground as opposed to naval).
The more complex it gets, the more evil it becomes. :)
Ph> I'm wondering how effective writing code to try to figure out how to
Ph> place attractors and repulsors would be... Any ideas?
Just one... you could leave this run over night. Have the unit start at a
random location and go to a random destination and trace its path in a data
element (well, it'd be best to use an array the same size as the map and add
one to wherever the unit goes). Then, after it gets there or some sort of
time elapses, you located the areas with the largest concentration of movement
and put a beacon there that is proportional to the concentration of movement.
As for determining where high concentrations are (keep in mind this is over an
area), I'm sure there is a simple way, as colour quantization techniques do
similar things in 3D.
P.S. Yeah, I know I've been pushing this add-one-to-the-map idea over the edge
lately. :)
Hey, Steven! How ya been? Or do you prefer to go by your given name now??
Hu> : Question: How does it tell when it is stuck?
Hu> Alternatively, why not just define your terrain as not having
Hu> obstacles your units can *never* cross? Replace those with "expensive"
Hu> terrain that take long to cross, but not impossible, and your
Hu> convex/concave problem goes away as well.
True, but that might look ridiculous depending on the game. Suppose a game is
in a city ... how do you cross a solid wall? :)
But, that *would* work brilliantly in games that are set in the great
outdoors.
Hu> How's WASTED coming along, Sunir? :) Oh, I'm sorry, was it WASTE
Hu> instead? :)
WASTE is having troubles avoiding obstacles as well.
It's moving along, but all of a sudden, it slowed down a lot. Fortunately, a
lot was done before we hit the wall (so to speak <G>).
:> Ph> Just modify your maps so that all blocking terrain is convex. ;-)
:>Seriously, that is a viable solution.
Dm> Actually, in this case Sunir, it is not.
Not in this *particular* case, no. In general, it may be. Sue me for looking
beyond this instance. L:) (Sunir with an axe in his head)
:>Question: How does it tell when it is stuck?
Dm> That was the point of my original post! :)
I know.
:>It'd get stuck because of those too if the map is modifiable.
Dm> It is, and thus, I don't have any real control over the obstacles.
The beacon method is also out.
> Question: How does it tell when it is stuck?
Br> The easy way out would be to use time. The object could be considered
It wouldn't be accurate. Suppose a complex, maze-like map where the unit has
to go back and forth. The time would have to be equivalent to the area *
velocity (kind of.. velocity is 1D, area is 2D.. but just screw that and use
the numbers).
Br> time, or if it stays in the same area for too long.
That's what I mean.. how do you figure that out cheaply?
Br> NB: I've quoted the word "beacons" because it usually has a slightly
Br> different meaning in robotics.
Are you referring to the method where each robot is a beacon for the others?
Then you go out until you get to the end of beacon range at which point the
next robot comes down the line.
Anyway, that reminded me of the last time (first time) I had heard about
beacons. I had an idea about the effects of that.
You send out units until they are no longer in visible sight of other units.
If one still is, keep it moving. Then send out more units.
In this way, the units should spread out over the map and therefore be able to
cover the most area for recon. It also makes them go out in an expanding
area. It also creates a border/front which you can use to detect incoming
enemies or push them back.
Just a thought.
Someone earlier (perhaps a different thread) mentioned a wave propagation
approach. You could start at A, and pick all squares within, say 45 degrees
to line AT. Then take each of those squares and do the same. Some easy
optimisations of this would be: Keeping track of squares visited to
avoid duplication. Carefully adjusting the angle through which you sweep
your search. Picking lower cost squares over higher cost squares.
Keep in mind also that if your terrain has varying costs, the shortest
route this way may not necessarily be the cheapest one. You might want to
pick all current points within a certain radius once one point reaches the
target, or something along those lines (pick the five closest points and
try a line of sight shot or something).
--
-Andy Searls
graduate, Borg Institute of Technology
Late Enhancements: unrecognized requirements due to lack of proper analysis.
/********************************************
Are you surfing, or just treading bits?
********************************************/
http://gaia.ecs.csus.edu/~searlsa
"Proof of Murphy's Law:
Murphy's Law cannot be proven, yet is correct, as when you try to
prove Murphy's Law, you will see that the proof is incorrect.
This is obviously due to Murphy's Law, therefore
one can only conclude that Murphy's Law is correct."
"If you're looking for a 'Hello, world' OS,
DOS will fit the bill perfectly"
Yes it was, perhaps, the best pathing presentation that has been
done at CGDC.
Did you get Bryan to upload his little pathing demo to your web page?
>: An interesting point that I just realized: A* is faster than
>: Dijkstra, but not always as good...it uses heuristics, and tries
>: to go towards the target and around obstacles. This can lead to it
>:
>: [snip]
>
>Agreed, although Bryan did demonstrate a nasty concavity situation
>which A* took a *lot* longer to solve than Dijkstra. Six of one,
>half dozen of the other....
>
Yes, that surprised me too. I've not seen a modified A* do that.
Regards,
Eric Dybsand
Glacier Edge Technology
Glendale, Colorado, USA
This has been my experience too Darren, that with a modified A*,
the unit often found the kind of path I might have taken as a
driver (especially with an off road type vehicle) but not necessarily
the most efficient path.
I also found that pathfinding during real time, created some
interesting artifacts in the data as a longer path was calculated.
: Hey, Steven! How ya been? Or do you prefer to go by your given name now??
Quite fine... 'Steven' will be just fine ;)
: True, but that might look ridiculous depending on the game. Suppose a game is
: in a city ... how do you cross a solid wall? :)
A tank or APC can ram the wall, while infantry can climb it ;). In any
case, a simplistic "go left" might work for non-continuous obstacles.
: But, that *would* work brilliantly in games that are set in the great
: outdoors.
Except when (as somebody else pointed out) the scale is large and you have
marine and land units. Something of the scale of Empire would obviously
not work, because ships would climb on land ;).
: Hu> How's WASTED coming along, Sunir? :) Oh, I'm sorry, was it WASTE
: Hu> instead? :)
: WASTE is having troubles avoiding obstacles as well.
: It's moving along, but all of a sudden, it slowed down a lot. Fortunately, a
: lot was done before we hit the wall (so to speak <G>).
Equip the unit with a terrain-blaster :).
Which is why I said "the easy way out" ;) I didn't intend to make
some general solution applicable to every possible scenario.
> Br> time, or if it stays in the same area for too long.
>
> That's what I mean.. how do you figure that out cheaply?
I've stumbled upon a similar technique that uses a local
spatial memory, which may be more suitable. Try
http://www.cc.gatech.edu/grads/b/Tucker.Balch/papers/avoid_past.ps.Z
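The avoid-past idea can be sketched very simply: the unit keeps a per-cell visit counter and prefers less-visited neighbours, which breaks the loops that plain wall-bumping falls into. A minimal Python sketch (my own, not from the Balch paper; the grid model and tie-breaking rule are assumptions):

```python
def step(pos, goal, visits, blocked):
    """One move of an avoid-past walker: remember how often each cell
    was visited, prefer rarely-visited neighbours, break ties by
    Manhattan distance to the goal. (Illustrative sketch only.)"""
    visits[pos] = visits.get(pos, 0) + 1
    x, y = pos
    candidates = [(x + dx, y + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if (x + dx, y + dy) not in blocked]
    return min(candidates,
               key=lambda c: (visits.get(c, 0),
                              abs(c[0] - goal[0]) + abs(c[1] - goal[1])))
```

Because the visit counts only grow, a unit that oscillates in front of an obstacle eventually makes every nearby cell "expensive" and spills out around it — a cheap local fix, though it gives no guarantee about path quality.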
> Ph> I'm wondering how effective writing code to try to figure out how to
> Ph> place attractors and repulsors would be... Any ideas?
>
> Just one... you could leave this running overnight. Have the unit start at a
> random location and go to a random destination and trace its path in a data
> element (well, it'd be best to use an array the same size as the map and add
> one to wherever the unit goes). Then, after it gets there or some sort of
> time elapses, you locate the areas with the largest concentration of movement
> and put a beacon there that is proportional to the concentration of movement.
>
> As for determining where high concentrations are (keep in mind this is over an
> area), I'm sure there is a simple way, as colour quantization techniques do
> similar things in 3D.
>
> P.S. Yeah, I know I've been pushing this add-one-to-the-map idea over the edge
> lately. :)
Hmm...might not have to run overnight. Imagine this:
The user makes a map (may even be randomly generated). When it is selected to be played,
your program initializes an equal size array to 0 and then charts shortest paths from
each edge to opposing edges (on a 64x64 map, 128 paths are calculated). It adds 1 to
each cell crossed each time. This could be calculated real quick on loading the map, and
would give you some good info on Hotspots (passes, roads, etc) and ambush points (a
hotspot with dead zones next to it would be an obvious place to look).
Hey! Sunir! This is sounding interesting. Anyone have any ideas on evaluating the data
generated by this system?
This could be performed recursively as well....calculate a second pass from the edges to
the biggest hotspots, to show where the enemy is liable to be while on his way to the
hotspots, and where to suspect ambushes, and where to lay them...of course, depending on
your game system/map design, edges might not be the best starting points for this second
pass. Other hotspots might be, or cities, or whatever.
This map could be used as a bias adjustment for burnt circles, too.
There must be a big flaw in this somewhere...it sounds too easy <g>.
-Darren Reid
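The pass-counting idea above can be sketched in a few lines. This is a minimal illustration (Python, with my own encoding: 0 = open, 1 = wall; for brevity it only runs the left-edge-to-right-edge paths, half of the 128 the post describes for a 64x64 map):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Plain BFS shortest path on a 4-connected grid (0 = open, 1 = wall).
    Returns the list of cells from start to goal, or [] if unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 \
                    and (nx, ny) not in prev:
                prev[(nx, ny)] = cell
                q.append((nx, ny))
    return []

def hotspot_map(grid):
    """Add 1 to every cell crossed by each left-to-right edge path;
    high counts mark passes and roads, zeros next to them dead zones."""
    h, w = len(grid), len(grid[0])
    counts = [[0] * w for _ in range(h)]
    for y in range(h):
        for x, yy in shortest_path(grid, (0, y), (w - 1, y)):
            counts[yy][x] += 1
    return counts
```

Even on a tiny map with one wall cell, the detouring paths pile up on the open row beside it, which is exactly the "hotspot" signal described above.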
> That's Bryan Stout's presentation; he lurks around here too sometimes.
>
> Bryan did a great job of covering all the major pathing algorithms and
> approaches, and had a little tool that demonstrated each. You could watch
> how (normal) Dijkstra's "spirals" out looking for a solution vs. A*'s
> "searchlight" method. It was very cool and easily one of the best
> presentations at the CGDC.
Cool! I didn't go to that seminar, as I had other things to do...Bryan was very visible
in a few of the roundtables that I attended, and introduced himself to me eventually
(guess I was visible too <g>). I would love to look at that tool...I'll have to email
him and see if he'll share it.
Mail sent with Portico for Excalibur
What's wrong with a simple recursive algorithm to solve the shortest path thing? I did one
up in a couple hours after work one day (in QBasic, no less!), and it seems to find paths
plenty fast in any semi-open landscape.
I can only assume that A* uses some other method, or it wouldn't get stuck. Is it really
all that good?
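For what it's worth, here is roughly what such a naive recursive search looks like (Python rather than QBasic; my own sketch, not the poster's code). It does find *a* path quickly in open terrain, but because it commits to the first direction that works, the path can be far from shortest — which is exactly what cost-ordered searches like Dijkstra's and A* fix:

```python
def find_path(grid, pos, goal, seen=None):
    """Naive recursive depth-first search on a grid (0 = open, 1 = wall).
    Returns *a* path as a list of cells, not necessarily the shortest;
    recursion depth grows with path length."""
    if seen is None:
        seen = set()
    if pos == goal:
        return [pos]
    seen.add(pos)
    x, y = pos
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        nx, ny = nxt
        if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) \
                and grid[ny][nx] == 0 and nxt not in seen:
            rest = find_path(grid, nxt, goal, seen)
            if rest:
                return [pos] + rest
    return None
```

Even on an empty 3x3 grid this sketch can wander and return a longer-than-necessary path, and on a large map the recursion depth itself becomes a problem.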
Here I am, poking out of lurking. (My mom's in town to see the new baby;
sometimes things like that keep me away from even lurking.)
I also like that web page.
> Bryan did a great job of covering all the major pathing algorithms and
>approaches, and had a little tool that demonstrated each. You could watch
>how (normal) Dijkstra's "spirals" out looking for a solution vs. A*'s
>"searchlight" method. It was very cool and easily one of the best
>presentations at the CGDC.
Thank you.
>: An interesting point that I just realized: A* is faster than Dijkstra, but
>: not always as good...it uses heuristics, and tries to go towards the target
>: and around obstacles. This can lead to it finding a more direct but
>: possibly more expensive (in movement points) path than Dijkstra...and this
>: side-effect might be actually useful to someone who is trying to simulate
>: "realistic" unit movement over "perfect" unit movement.
>
> Agreed, although Bryan did demonstrate a nasty concavity situation
>which A* took a *lot* longer to solve than Dijkstra. Six of one, half dozen
>of the other....
I guess I didn't explain some things well enough. [Terminology: A* sorts the
candidate nodes based on the lowest f(n) = g(n) + h(n). g(n) is the (shortest)
cost from start to n, h(n) the estimate of the shortest cost from n to goal.]
As long as h(n) is never greater than the true cost from n to the goal, A* will
*always* find a shortest path. (I say "a", not "the", because several paths
may have the same cost.)
Now, A* is not a single search, but a class of searches, depending on the type
of heuristic function h(n) being used. If h(n) is always zero, we have
Dijkstra's algorithm, which counts the cost from start but makes no estimate of
the remaining cost. IOW, Dijkstra's *is* an A* search.
I think Darren is referring to the Best-First search, which does the opposite,
namely sorting only on h(n), ignoring the actual cost of the search. It can be
a lot faster than the A* searches, but can end up with a lousy path.
If the h(n) function is no longer guaranteed to be an underestimate, then the
search is not truly "A*", but only "A" (a bit of nomenclature I didn't mention
in the lecture or proceedings). The weightier h(n) is, the more A performs
like a best-first search; the lighter, the more like Dijkstra's. How one
weights h(n) would depend on the tradeoff between time of search and quality of
path that is best for the game.
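Bryan's point that Dijkstra's, A*, and best-first are one family differing only in how h(n) is weighted can be seen directly in code. A minimal grid version (my own sketch; uniform step cost of 1 assumed):

```python
import heapq

def a_star(grid, start, goal, h, weight=1.0):
    """A*-family search sorting on f(n) = g(n) + weight*h(n).
    h = 0      -> Dijkstra's algorithm
    weight = 1 -> classic A* (optimal if h never overestimates)
    weight > 1 -> leans toward best-first: faster, path may be longer."""
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}
    prev = {start: None}
    frontier = [(weight * h(start, goal), start)]
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < cols and 0 <= ny < rows and grid[ny][nx] == 0:
                ng = g[cell] + 1
                if ng < g.get(nxt, float('inf')):
                    g[nxt] = ng
                    prev[nxt] = cell
                    heapq.heappush(frontier, (ng + weight * h(nxt, goal), nxt))
    return None

def manhattan(a, b):
    # An admissible heuristic on a 4-connected grid with unit step cost.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```

Passing `lambda a, b: 0` as h gives exactly Dijkstra's "spiral"; passing `manhattan` gives the A* "searchlight"; raising `weight` trades path quality for speed, matching the tradeoff described above.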
Steve, I can't think of anything I did which showed A* being slower than
Dijkstra's, since A* at its worst was exactly Dijkstra's; while with a high
h(n) it went even faster. Can you describe what you remember better?
Bryan Stout
bst...@interramp.com