On 1 December 2013 00:47, Ian Qvist <
qvis...@gmail.com> wrote:
> I thought HyperNEAT utilized CPPNs, to generate larger neural
> network topologies with the properties of a CPPN (repetition, symmetry and
> so on).
Plain NEAT works by evolving an ANN directly (e.g. via mutations that
add nodes and connections one at a time).
HyperNEAT adds another layer: it evolves an ANN in exactly the same
way, but then 'queries' it and uses the answers to those queries to
build a second ANN. The first ANN has inputs for coordinates in a 3D
space, so we can input coordinates and the response tells us what we
should place there (if anything). For HyperNEAT we pre-define a set of
neurons and their positions in the 3D space; we can then 'ask' the
first ANN "should neuron 1 be connected to neuron 2?", and so on.
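As a rough sketch of that querying step (in Python, with a simple stand-in function in place of a real evolved CPPN; the function, substrate layout and threshold here are all hypothetical, just for illustration):

```python
import itertools
import math

# Stand-in for an evolved CPPN: any smooth function of the two neuron
# positions will do for illustration purposes.
def cppn(x1, y1, x2, y2):
    return math.sin(x1 + 2 * y1) * math.cos(x2 - y2)

# Pre-defined substrate: neuron id -> (x, y) position.
substrate = {1: (0.0, 0.0), 2: (0.5, 0.0), 3: (0.0, 0.5), 4: (0.5, 0.5)}

WEIGHT_THRESHOLD = 0.2  # below this magnitude, no connection is made

# Query the CPPN once per ordered pair of neurons; the output is the
# connection weight (or effectively "no connection" if too small).
connections = []
for (a, (x1, y1)), (b, (x2, y2)) in itertools.permutations(substrate.items(), 2):
    w = cppn(x1, y1, x2, y2)
    if abs(w) > WEIGHT_THRESHOLD:
        connections.append((a, b, w))
```

The resulting `connections` list is the second ANN's wiring; the CPPN itself never runs at "run time", only during this construction step.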
So the first ANN is called a CPPN (Compositional Pattern Producing
Network), and the predefined neurons and their coordinates are called
a substrate. The CPPN thus defines how to connect up the neurons in
the substrate, and the substrate can be 2D or 3D (or more if you want
to get really confusing!).
It sounds like what you want is a 2D substrate, with a grid instead of
a few neurons scattered throughout the space, correct? So you want the
CPPN to accept (x,y) as input, and to output a value that indicates a
material type, yes?
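If so, the query loop would look something like this (again a sketch with a hypothetical stand-in CPPN; one output per material type, picking the largest, is just one reasonable decoding):

```python
import math

MATERIALS = ["empty", "stone", "wood"]

# Stand-in CPPN for illustration: maps a cell's (x, y) to one output
# per material type. The real CPPN would be the evolved network.
def cppn(x, y):
    return [math.sin(3 * x), math.cos(3 * y), math.sin(x + y)]

WIDTH, HEIGHT = 4, 3

grid = []
for row in range(HEIGHT):
    grid.append([])
    for col in range(WIDTH):
        # Normalise grid indices into [-1, 1], a common substrate range.
        x = 2 * col / (WIDTH - 1) - 1
        y = 2 * row / (HEIGHT - 1) - 1
        outputs = cppn(x, y)
        # Decode: the material whose output is largest wins the cell.
        grid[-1].append(MATERIALS[outputs.index(max(outputs))])
```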
The HyperNEAT CPPN, by contrast, accepts both a source and a target
position (x1,y1,x2,y2), because it outputs whether *two* neurons (at
the specified coords) should be connected, and that is in the form of
a connection weight (strength) output. Very low weight == no
connection.
The d input is therefore specific to HyperNEAT: in addition to the
source and target coords as input, we can optionally add a distance
input as well, the distance between (x1,y1) and (x2,y2). So I don't
think the d input applies in your case.
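For completeness, the optional d input is just computed from the coordinates before each query, along these lines (hypothetical stand-in CPPN again):

```python
import math

# Stand-in CPPN with an extra distance input, for illustration only.
def cppn(x1, y1, x2, y2, d):
    return math.tanh(x1 * x2 + y1 * y2 - d)

def query(x1, y1, x2, y2):
    # d is the Euclidean distance between source and target, computed
    # by the caller and fed to the CPPN as one more input.
    d = math.hypot(x2 - x1, y2 - y1)
    return cppn(x1, y1, x2, y2, d)
```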
This is just some background to establish that we're understanding
each other. We can cover other questions later if necessary.
Colin