Strategy question


Pico Geyer

Dec 6, 2011, 1:40:57 AM
to open...@googlegroups.com
Hi all.

I've been playing around with OpenNERO for a day or two, and I think I
understand the basics.
However, I'm having a really hard time creating effective strategies.

Let me start with a simple, specific case.
I'm trying to create a sniper character, so the first thing I do is
create a target (a turret) and then place my spawn location a little
way away.
I then tune the reward parameters to Approach Enemy = -40 (rewarding
agents for staying far from the enemy) and Hit Target = 100.
Without any obstacles and only one enemy, things seem to work OK.
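
As a minimal sketch of the reward combination described above (the
function and its normalization are illustrative assumptions, not
OpenNERO's actual reward API):

# Minimal sketch of the weighted reward described above. The function
# and its normalization are illustrative assumptions, not OpenNERO's
# actual reward API.
def sniper_reward(dist_to_enemy, max_dist, hit_target):
    approach_weight = -40.0  # negative: punish closeness, reward distance
    hit_weight = 100.0       # large bonus for landing a shot

    # Closeness in [0, 1]: 1.0 when touching the enemy, 0.0 at max range.
    closeness = 1.0 - min(dist_to_enemy / max_dist, 1.0)
    return approach_weight * closeness + (hit_weight if hit_target else 0.0)

# A distant agent that hits the target scores best:
print(sniper_reward(dist_to_enemy=250.0, max_dist=300.0, hit_target=True))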

But when I add even a small wall in the way, hoping that my snipers
will move a bit to the left or right and still fire at the target, it
never seems to work. Some of the snipers do the right thing for a very
small fraction of the time, but most of them just fire at the wall.
They also backpedal as far as they can (even if it means no longer
being able to hit the target).
I've attached a screenshot of the typical scenario.
Does anyone have suggestions on how to make this work?

Thanks in advance,
Pico

Shooting_wall2.png

Philip

Dec 7, 2011, 4:44:58 AM
to opennero

Hi,

I have exactly the same problem (I assume you're using only
Q-learning). Obviously we're doing something wrong...

I also tried to get the same effect as in the sample video (everyone
runs to the target), but once I put in the wall, my training results
start to differ from those in the video.


Filip

Jonathan Wheare

Dec 7, 2011, 6:15:59 AM
to open...@googlegroups.com
Hi There,

The issue, as I read it, is that the obstacle detection radar has a
range of 100 units, while agents can detect enemies at significantly
larger distances. Lacking any way to know that there is an obstacle in
the way, they just blaze away merrily without effect.

The only method I have been able to find to combat this is to direct the
agents to advance on the enemy and train them to avoid obstacles when
they do.

I was musing about adding a sensor that indicates when fire is on
target but ineffective, signaling that the agent should attempt to
find a way around any obstacles, but that would be beyond the scope of
this competition.
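
For what it's worth, that sensor could be sketched roughly like this
(all agent methods used here are hypothetical, not existing OpenNERO
code):

# Rough sketch of the proposed sensor: returns 1.0 when the agent has
# been firing on target without dealing damage, which suggests an
# obstacle in the line of fire. The agent methods used here
# (shots_fired, aim_on_target, damage_dealt) are hypothetical.
def fire_blocked_sensor(agent, window=10):
    shots = agent.shots_fired(last_n_ticks=window)
    if shots == 0 or not agent.aim_on_target():
        return 0.0  # not firing, or not aiming at an enemy
    # Firing on target with zero damage implies something is in the way.
    return 1.0 if agent.damage_dealt(last_n_ticks=window) == 0 else 0.0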

J.

Filip Filipov

Dec 7, 2011, 1:30:49 PM
to open...@googlegroups.com
On Wed, Dec 7, 2011 at 13:15, Jonathan Wheare <jonatha...@gmail.com> wrote:
> The issue, as I read it, is that the obstacle detection radar has a
> range of 100 units, while agents can detect enemies at significantly
> larger distances. <snip>

Hi

Thank you for explaining!

About the range: I thought it was 300 because of the constants.py file. [1]
<snip>
"# maximum vision radius for most sensors
MAX_VISION_RADIUS = 300
"
</snip>

But treating it as 100 gives good results.

[1] http://code.google.com/p/opennero/source/browse/trunk/mods/NERO/constants.py
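
To make the mismatch concrete, here is a small self-contained
illustration (the 300-unit radius comes from constants.py, the 100-unit
obstacle radius from Jonathan's explanation above; the geometry and
positions are made up):

import math

ENEMY_RADAR_RADIUS = 300.0     # per MAX_VISION_RADIUS in constants.py
OBSTACLE_RADAR_RADIUS = 100.0  # per Jonathan's explanation above

def in_range(a, b, radius):
    return math.dist(a, b) <= radius

agent, wall, enemy = (0.0, 0.0), (0.0, 120.0), (0.0, 250.0)

print(in_range(agent, enemy, ENEMY_RADAR_RADIUS))    # True: enemy is visible
print(in_range(agent, wall, OBSTACLE_RADAR_RADIUS))  # False: wall is not
# The agent "sees" a clear shot and fires into the wall.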

Filip

Pico Geyer

Dec 8, 2011, 5:10:58 AM
to open...@googlegroups.com
On Wed, Dec 7, 2011 at 8:30 PM, Filip Filipov
<pilif...@googlemail.com> wrote:
<snip>
> About the range: I thought it was 300 because of the constants.py file. [1]

Yeah, I made the same assumption.
It would make more sense if the agent could detect walls and enemies
at the same range.
It's a rather strange sensor that can detect an enemy at a large
distance but not a wall at a closer one :)

Thanks for the input. I'll look at the code a bit more and try
getting my agents to move a bit closer.

Regards,
Pico

Igor Karpov

Dec 8, 2011, 7:58:32 AM
to open...@googlegroups.com
All these sensors are modeled after the original Torque-based NERO, and
they are not necessarily the last word once the tournament is done. In
the original NERO, the population files could also configure the
sensors, which we can also do in Python, and there are other sensors we
could implement.

Generally we found that sensors, and features in general, have a huge
effect on all kinds of learning algorithms and the space of behaviors
they search. Ideas such as this one are right on target, no pun
intended.
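
As a sketch of what Python-side sensor configuration might look like
(the class and field names are invented for illustration, not
OpenNERO's actual interface):

from dataclasses import dataclass

# Hypothetical sensor specification; not OpenNERO's actual interface.
@dataclass
class RadarSpec:
    kind: str           # e.g. "enemy" or "obstacle"
    radius: float       # detection range in world units
    start_angle: float  # sector start, in degrees
    end_angle: float    # sector end, in degrees

# Giving walls and enemies the same radius would remove the blind spot
# discussed earlier in this thread.
sensors = [
    RadarSpec("enemy",    radius=300.0, start_angle=-90.0, end_angle=90.0),
    RadarSpec("obstacle", radius=300.0, start_angle=-90.0, end_angle=90.0),
]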