http://www.rubinghscience.org/ai/evoloutline.html
June 2002
Outline of my approach for researching
mechanisms for learning and adaptation by means
of evolution of brain structures
I'm fascinated by artificial intelligence, memetics, evolution, and by
the philosophical questions related to human intelligence. I believe
that human intelligence, human consciousness, and all processes in the
human brain, can be explained fully mechanically. Creating artificial
intelligence which shows aspects of human-like intelligence I believe
to be the only way to prove that the human mind is 100% mechanical.
My model of human brain operation and human learning is that inside the
brain, items of knowledge undergo an evolutionary process of selection,
mutation, and elimination. Human inventiveness results from random
mutations in existing items of knowledge, and the weeding out of misfit
mutations.
Moreover, the human brain exists not in isolation, but is in constant
interaction with the body and the environment outside of the brain: it
receives sensor input and sends out motor control output signals. That
is, the brain as a whole functionally plays the same role as a
single McCulloch-Pitts-like neuron, receiving sensor inputs on its
input terminals and sending out a motor signal. It is possible to
successfully control a simple autonomous robot with a brain consisting
of a very small number (about 2 to 10) of such neuron cells. Brains of
more ``intelligent'' entities would be constructed from more complex
combinations of neuron cells.
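The neuron model referred to above can be sketched in a few lines. The following is a minimal illustration (all names and weights are hypothetical, not taken from the actual design): a McCulloch-Pitts style threshold unit, and a two-cell controller in which each neuron weighs the light-sensor inputs and drives one wheel, so the robot steers toward the brighter side.

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def control_step(left_light, right_light):
    """Two neurons, one per wheel: driving the wheel opposite the
    brighter sensor turns the robot toward the light."""
    left_motor  = mp_neuron([left_light, right_light], [0.2, 0.8], 0.5)
    right_motor = mp_neuron([left_light, right_light], [0.8, 0.2], 0.5)
    return left_motor, right_motor
```

With light on the left, only the right wheel is driven, and vice versa; this is the sense in which two to ten such cells can already control a simple autonomous robot.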
---
What I aim to do is to
extend the above picture of an artificial brain constructed from
neuron cells with the following:
- Constructing the brain in such a way that the neuron cells
grow automatically in response to the sensor inputs and possibly
other feedback signals which the brain receives from its
environment (sensors in body parts and sensors receiving e.g.
visual input from outside the body). New neuron cells are grown in
places in the brain where there exists data that has to be (or can
be) ``made more sense of''. The rate, and location, of new neuron
cell growth, and the (initial) contents/structure of these new
neuron cells follows from (data associated with) the parent neuron
cells from which the new neuron grows.
- A slight extension of the above mechanism allows neuron
cells to act as executable programs that undergo a process
of evolution. Each neuron
cell is the embodiment of one single item or aspect of behaviour,
somewhat similar to a separate subroutine or line of code in a
big computer program.
Neuron cells that store useful information about aspects
of successful behaviour are ``reinforced'', which means that a
``quality figure'' associated with the neuron cell is incremented.
Neuron cells that are not useful for generating successful behaviour
are ``degraded'' (meaning their quality figure is decremented) and
are eventually eliminated.
The feedback input received by the brain after the robot has
executed an action generated by the brain, for example a pleasure
or pain feedback signal, is used to alter the quality figures of
the sub-set of neurons which were used to generate the behaviour
that caused the feedback signal.
Neurons which, when used, consistently result in pleasure feedback
are called ``good'' cells, and are high-quality cells. Likewise,
neurons that consistently result in pain feedback, are called
``bad'' cells; and these ``bad'' cells are also high-quality
cells, because they too store relevant information about successful
robot behaviour, namely about specific types of behavior that must
be avoided. Both ``good'' and ``bad'' high-quality cells contain
learned knowledge that is relevant to successful behaviour of the
robots; the ``good'' and ``bad'' cells apply to the same domain of
information and/or behaviour (they represent complementary aspects
of the same information), and it seems probable that these two
kinds of neuron cells must always co-evolve.
When a neuron cell does not fairly consistently receive the same
type of feedback (that is, if it receives alternately pain and
pleasure feedback) then this means that it does not store
meaningful information; that is, that this particular
``subroutine'' is not a useful part of the programming of the
robot. The inconsistent feedback signals cause such neuron cells
to degrade their quality figure.
High-quality cells are more likely to act as parents producing
new neuron cells (which may be mutations of the parent cell,
mixes of various parent cells, or may be derived in other ways
from parent cells).
- Unlike classical types of evolution, in which the entities
that evolve exist side by side, the above type of brain contains
not only neuron cells that exist side by side in that same way,
but additionally offers the interesting, and for an intelligent
brain in my opinion very essential, possibility that neurons are
connected in layers in a hierarchical fashion, where
a layer of neurons grown ``on top of'' a more basic, earlier,
layer is concerned with data and patterns on a more ``abstract''
level. Bottom layers would do preprocessing and recognize some
crude ``symbols'' among the mass of raw sensory input received by
the robot; and higher layers of neurons would then operate in
terms of these higher-level symbols instead of the raw signals
at the terminals of the brain. This means that lower layers
recognize the concepts in which the higher levels ``think''.
It is here important that the
construction of this stacking of layers of neurons is different
from a ``classical'' neural network of a few layers of neuron cells
connected head-to-tail, where the outputs of layer i are
fed to the inputs of layer i + 1. Instead,
motor and sensor symbols are all considered as inputs, and are
presented simultaneously to the neuron cells. The motor symbol
used is a random or ``trial'' choice. A neuron cell ``fires'' if the
combination of sensor and motor symbols ``fits'' to the cell. If
the cell fires, the cell sends an entirely new symbol (the unique
name of the cell) on as input to the next, higher, layer of
neurons. If the total pattern of sensor and motor symbols
presented to the brain gets absorbed by a collection of ``good''
neuron cells, then the action associated with the ``trial'' motor
symbol is executed; if the pattern is absorbed by ``bad'' neuron
cells, the action is inhibited (not executed, or sent a ``stop''
signal). If the pattern cannot successfully be absorbed by the
brain, then this means that the pattern contains new information
-- as a result of which, new brain cells are grown at the places
where the ``misfits'' between existing neurons and the new data
are greatest.
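The decision cycle just described can be sketched roughly as follows (a hypothetical illustration with invented names; cells are modelled as simple records). A cell fires when its stored pattern ``fits'' the presented symbols and passes its own unique name up as a new symbol; the verdict depends on whether ``good'' or ``bad'' cells finally absorb the pattern, and an unabsorbed pattern triggers growth of new cells.

```python
def run_layer(symbols, layer):
    """Return the names of the cells in this layer that fire."""
    fired = set()
    for cell in layer:
        if cell["pattern"] <= symbols:   # the symbol combination "fits"
            fired.add(cell["name"])
    return fired

def decide(sensor_symbols, trial_motor, layers):
    """Execute, inhibit, or grow, depending on which cells absorb the
    sensor symbols plus the trial motor symbol."""
    symbols = set(sensor_symbols) | {trial_motor}
    verdict = "grow"                     # unabsorbed pattern => new cells
    for layer in layers:
        fired = run_layer(symbols, layer)
        for cell in layer:
            if cell["name"] in fired and cell["kind"] == "bad":
                return "inhibit"         # ``bad'' cells veto the action
            if cell["name"] in fired and cell["kind"] == "good":
                verdict = "execute"
        symbols |= fired                 # fired names feed the next layer
    return verdict
```

For example, a single ``good'' cell matching {light, forward} lets the trial action execute when light is sensed, while a ``bad'' cell matching {wall, forward} inhibits it near a wall.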
- The above process of generation and change of neuron cells
is autonomous : it is not specific to the exact nature and
number of the sensor and motor input and output terminals hooked
into the brain. The brain adapts dynamically to whatever new
sensors and motors are inserted into (or removed from) the system.
This means that this brain design works for any kind of robot.
Installing a blank brain into a totally new kind of robot should
result in a robot that autonomously learns to behave successfully
in whatever environment it is let loose in. The robot brain is
the part of the robot whose specific purpose is this adaptation of
the possibilities of the robot's physique to the opportunities in
its environment : the brain is the part that accommodates and
adapts itself to act as an intermediary between the two.
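The terminal-agnostic character of such a brain can be illustrated with a small sketch (hypothetical interface, names mine): the brain keeps no fixed wiring to particular sensors, only a mapping from whatever terminal names are currently attached, so terminals can be plugged in or removed at run time without failure.

```python
class Brain:
    def __init__(self):
        self.terminals = {}              # terminal name -> latest signal

    def attach(self, name):
        self.terminals[name] = None      # new sensor/motor terminal

    def detach(self, name):
        self.terminals.pop(name, None)   # removed from the robot body

    def sense(self, signals):
        """Accept whichever signals the current physique provides."""
        for name, value in signals.items():
            if name not in self.terminals:
                self.attach(name)        # unknown terminal: adapt, don't fail
            self.terminals[name] = value
        return sorted(self.terminals)
```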
- The notion that the measure of optimality for an artificial
brain is that the brain should control the robot in such
a way that the behaviour of the robot is good for the
survival of the robot. The measure or criterion for what
is intelligence in the brain is inherently connected to
behaviour. Measuring and assessing intelligence is impossible
without observing behaviour. If a brain is not connected
to sensors and robot arms, or equivalent subsystems, through
which the brain interacts with an environment, it's impossible
to speak of intelligence. Survival of the intelligent
robot also always presupposes an environment in which the
robot is placed and with which it interacts : intelligent
behaviour means that the robot learns from elements in its
environment how to behave in that environment in a way
that is optimal for its own survival.
- Parallels can be drawn between colony organisms (such
as ant colonies) and the way in which the separate neuron cells in
the brain ``cooperate'' to create an apparently coherent and
intelligent behaviour of the whole. Individual neuron cells are
very simple, and from a combination of interacting simple elements
a whole is formed that operates more ``intelligently''. This can
explain how the ``intelligence'' in a (human) brain arises
(emerges) from simple components which each by themselves are
not very ``intelligent''.
- Consciousness, like intelligence, must be measured by
observing behaviour. It is not plausible to measure
``consciousness'' directly, nor is it in my opinion plausible to say
that an entity that is conscious somehow directly ``feels'' its own
state of consciousness (like it e.g. feels pain or visually sees an
object in front of it). If we observe that an entity behaves in a
certain manner that exceeds a certain minimal intelligence, for
example if we observe that an entity recognizes its image in a
mirror as itself, then we as observers label that entity
``conscious''. This means (in my opinion) that consciousness is
simply a more intense form of ``intelligence''. I believe that
starting with a single neuron cell and then adding ever more neuron
cells, while all the time letting the brain interact (through a
suitable robot physique) with a suitably complex and challenging
environment will gradually result in first an ``intelligent'' robot,
and then (after even more layers of neurons and simultaneous
learning are added) a ``conscious'' robot. That is : I see no
reason why ``consciousness'' could not simply be a more evolved and
more intense form of ``intelligence''; engaging in the creation of
robot brain designs aimed at creating an intelligent robot would
by necessity also shed light on the philosophical problems
associated with consciousness.
- A longer-term goal for me is to try to connect the above with
memetics. Memes, like the neuron cells above, are also
entities that undergo evolution -- evolution of memes in a society
is another way to describe evolution of culture. I feel that it
ought to be possible to ``hook up'' sentences in a (maybe simple and
artificial) language to the symbols that are presented to the brain
as described above. It might thus be possible to construct, in a
relatively simple way, a brain that communicates with its
environment not through sensor and motor signals, but instead
through sentences in a language.
---
A terse outline of the current state of my work is given in
http://www.rubinghscience.org/cv/mydesign.out
(a message I posted on the pcp-discuss mailing list of the PCP project
(see http://pespmc1.vub.ac.be/NUTSHELL.html)).
Detailed
information on a specific part of the actual software that I've
created as a part of this work can be found in
http://www.rubinghscience.org/aiclub/toc.html,
particularly in
http://www.rubinghscience.org/aiclub/doc_cellc.txt.