All AI experts, am I correct here?
Neural-net methods = current chess database methods = brute force?
Theoretically, perhaps a neural net could learn and become more
efficient in the long run; is that the main difference? Would a neural net
that effective have to span the globe first?
For elementary models, they can be interesting. But
at the human decision level, has anything been built
that even remotely can predict any human behavior as
far as research psychology is concerned? That may be unfair to ask. I
don't think prediction of any human behavior has been achieved by
any research in psychology except for IQ scores, i.e., results
solidly above the usual 50-60% variance-explained levels.
I'm just asking. At the human motor level, I have
seen some solid work. I just don't know of anything
above a very elementary, primitive level.
Oh, about chess, the answer is no. And yes, you can bet
your bootie that neural nets have been tried. But
aside from theory, practically they don't work yet.
My guess is that the computational power needed to get a neural
net up and running is probably far more aggravation
than getting a brute-force database up and running.
One of the mathematicians here suggests he has a proof
that the database is the most efficient method.
Adam Sundor
I don't understand your suggestion. How would a genetic algorithm
work out the tactical details of a plan?
There have been some attempts to write learning chess programs, but
the ones I know about don't separate strategy and tactics as sharply
as you suggest. See Morph, NeuroChess and SAL on my Machine Learning
in Games page:
http://forum.swarthmore.edu/~jay/learn-game/index.html
>One of the benefits is the ability to
>"teach" the programme by feeding it GM games and chess puzzles, and teach it
>the ability to learn from its mistakes.
I think that learning from play against computer opponents of similar
strength (e.g. self-play) will usually work better than learning from
human expert performance. Humans think so differently from any learning
mechanism we can write that human performance tends to be more confusing
than helpful for a program.
If you're talking about learning from a human teacher, instead of from
human games, that's different. I don't know any work like that.
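To make the self-play idea concrete, here is a toy sketch (not Morph, NeuroChess, SAL, or any other program mentioned here) of what "learning from self-play" can mean mechanically: a linear evaluation function whose weights are updated TD(0)-style from games played against itself. The "game" is a stub, with positions as random feature vectors and a hidden evaluation deciding the outcome; every name in it is invented for illustration.

```python
import random

N_FEATURES = 4
ALPHA = 0.01  # learning rate

def evaluate(weights, features):
    # Linear evaluation: dot product of weights and position features.
    return sum(w * f for w, f in zip(weights, features))

def play_self_game(rng, length=20):
    # Stub self-play game: a random sequence of positions; the outcome is
    # the sign of a hidden evaluation of the final position.
    positions = [[rng.uniform(-1, 1) for _ in range(N_FEATURES)]
                 for _ in range(length)]
    hidden = [1.0, -0.5, 0.25, 0.0]
    outcome = 1.0 if evaluate(hidden, positions[-1]) > 0 else -1.0
    return positions, outcome

def learn_from_game(weights, positions, outcome):
    # TD(0): nudge each position's score toward the next position's score,
    # and the final position's score toward the game outcome.
    for i, pos in enumerate(positions):
        if i == len(positions) - 1:
            target = outcome
        else:
            target = evaluate(weights, positions[i + 1])
        error = target - evaluate(weights, pos)
        for j in range(N_FEATURES):
            weights[j] += ALPHA * error * pos[j]
    return weights

rng = random.Random(0)
weights = [0.0] * N_FEATURES
for _ in range(500):
    positions, outcome = play_self_game(rng)
    weights = learn_from_game(weights, positions, outcome)
print(weights)
```

The point of the sketch is only that the learner needs nothing from human games: its training signal comes entirely from outcomes of games against itself.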
Jay Scott, j...@forum.swarthmore.edu
Machine Learning in Games:
http://forum.swarthmore.edu/~jay/learn-game/index.html
The problem with this is that the GA requires the evaluation of each
candidate solution by a cost function (a measure of how good the
solution is). What would you suggest as the cost function? I
presume it would have to be the result of an n-ply deep move search,
with exactly the same problems that the current programs have. This
means that a GA would have a time penalty against current programs
(both would have to search, but the GA would also have to do the
replication of chromosomes etc.). By the way, the same would apply to
any other optimisation technique. There are some tasks that are better
done by heuristics, and I think chess may be one of them.
Of course, this is just my opinion and I never did any chess
programming but I am currently employed in a research project that
includes using GAs for optimisation.
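To illustrate the point about the cost function, here is a minimal GA sketch in which the candidate is a vector of evaluation weights and the cost is a cheap stand-in (distance to a fixed reference vector). In a real chess program that cost would itself require an n-ply search per candidate per generation, which is exactly the time penalty described above. All names and numbers are illustrative, not from any real engine.

```python
import random

rng = random.Random(42)
TARGET = [9.0, 5.0, 3.0, 3.0, 1.0]  # stand-in "ideal" weight vector

def cost(candidate):
    # Placeholder cost: squared distance to the reference weights.
    # A real cost would need expensive search-based evaluation here.
    return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Add small Gaussian noise to each gene.
    return [c + rng.gauss(0, 0.5) for c in candidate]

def crossover(a, b):
    # Uniform crossover: pick each gene from either parent.
    return [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]

# Initialise a random population and evolve it.
pop = [[rng.uniform(0, 10) for _ in range(5)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=cost)
    parents = pop[:10]  # truncation selection; parents survive (elitism)
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=cost)
print(cost(best))
```

Note that `cost` is called for every individual in every generation; replacing the placeholder with a real search makes that the dominant expense, which is the objection above.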
--
Pedro Mendes
home: pe...@enzyme.demon.co.uk , work: p...@aber.ac.uk | Aberystwyth,
http://gepasi.dbs.aber.ac.uk/pedro/prmhome.htm | Dyfed, Wales,
PGP mail welcome! (public key on demand) | U.K. (Eur.Un.)
Good point. I'm thinking the optimization would be performed after a
game is completed: the programme loads its weights and equations in order
to play a game, and any changes to them are made only after the game,
much like doing a post-hoc evaluation of one's own games; perhaps a
deep-ply search could be used at that stage to identify the optimal moves
for comparison. So the delays due to weight adjustment, replication and
mutation would not be incorporated into game time. Furthermore, since no
ply search is performed during play, move calculation might be fairly
quick: the idea is that the equations find the solution the ply search
would have come up with, without having to do the actual search at game
time. And if a ply search makes errors, i.e., loses the game, the
equations can be modified based on the winning play, so that the equations
end up selecting better moves than the ply search would have. Anyway,
thanks for responding.
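A rough sketch of that post-game scheme, under stated assumptions: during play the program uses only a cheap evaluation; between games, a slower deep search (stubbed here as a hidden linear scorer) re-scores the finished game's positions, and the cheap evaluation's weights are nudged toward those scores. Everything here is a toy stand-in with invented names, not a real engine.

```python
import random

N = 4
rng = random.Random(1)

def fast_eval(weights, features):
    # The cheap evaluation used during the game itself.
    return sum(w * f for w, f in zip(weights, features))

def deep_search_score(features):
    # Stand-in for an n-ply search result, only affordable between games.
    hidden = [2.0, -1.0, 0.5, 0.0]
    return sum(h * f for h, f in zip(hidden, features))

def post_game_update(weights, game_positions, lr=0.05):
    # One post-hoc pass over the finished game: move the cheap evaluation
    # toward what the deep search says each position was worth.
    for pos in game_positions:
        error = deep_search_score(pos) - fast_eval(weights, pos)
        for j in range(N):
            weights[j] += lr * error * pos[j]
    return weights

weights = [0.0] * N
for game in range(200):
    # Stub "game": a sequence of random position feature vectors.
    positions = [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(30)]
    weights = post_game_update(weights, positions)
print(weights)
```

In this toy setting the cheap evaluation converges toward the deep search's scoring, which is the hoped-for effect: search-quality moves at game time without search-time cost.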
- Jeff Hamm
>
>Ultimately, I think the optimal program would simply understand the
>rules, but have a way of "teaching" itself [somehow] by playing itself
>over and over...
>
How is this a problem? Have it play itself (very simple).
>In article <4hghas$g...@mothra.westworld.com>, edp...@westworld.com (Ed
>Parry) writes:
>>
>>Ultimately, I think the optimal program would simply understand the
>>rules, but have a way of "teaching" itself [somehow] by playing itself
>> -------------
>>over and over...
>>
> How is this a problem? Have it play itself (very simple).
You seem to have neglected the other requirement:
it has to LEARN from the games, not just play itself over and over
mindlessly.
Ep
Ed Parry
edp...@westworld.com
If you feed it all possible positions:
about 3612 (KK) x 62 x 61 x 1/4 (mirroring king) = .. positions,
then it probably will play the right move.
> Andrew Walker
Ten lines of hardcoded C and one array of 64 bytes
are also sufficient for this.
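The 3612 (KK) figure quoted above can be checked directly: it is the number of ways to place two kings on an 8x8 board on different, non-adjacent squares (ordered, before the mirroring reduction and before adding the other pieces). A quick brute-force count, in Python rather than the hardcoded C mentioned:

```python
def legal_kk_positions():
    # Count ordered placements of two kings on distinct squares
    # that are not adjacent (Chebyshev distance > 1).
    count = 0
    squares = [(r, c) for r in range(8) for c in range(8)]
    for wr, wc in squares:
        for br, bc in squares:
            if (wr, wc) == (br, bc):
                continue  # same square: impossible
            if max(abs(wr - br), abs(wc - bc)) <= 1:
                continue  # kings adjacent: illegal
            count += 1
    return count

print(legal_kk_positions())  # 3612
```

Equivalently: 64 x 63 = 4032 ordered pairs, minus 420 adjacent pairs (4 corners x 3 + 24 edge squares x 5 + 36 interior squares x 8), giving 3612.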
What about KP vs KNN?
It seems harder to make a function for that, one that
can do it (if possible) within 50 moves. I haven't managed so far.
Vincent Diepeveen
vdie...@cs.ruu.nl