
Updated Mentifex Theory of Consciousness for Artificial Intelligence


A.T. Murray

Jun 6, 2021, 12:49:19 PM
1. Diagram of a conscious AI Mind (see webpage linked below)

The above diagram cannot show consciousness itself, which is an emergent phenomenon resulting from the sensorimotor embodiment and thinking of a mind. Rather, it shows the main components of an artificial Mind: sensory input channels; abstract concepts; mind-modules for conscious thought; and motor activation memory. The Volition module depicted above combines the urgings of Emotion with the deliberations of the English-language EnThink module to weigh the pros and cons of motor initiatives available for dynamic execution in the Motorium module. Thoughts in favor of a motor initiative raise the activation-level of a trigger mechanism towards the threshold at which an embodied AI Mind performs the voluntary action, while negative thoughts counseling against a proposed action drain the dynamic trigger and delay or prevent the proposed motor initiative. The conscious AI Mind thinks in terms of what it wants for itself and in terms of any dangers or difficulties involved. Executing the motor action completes a feedback-loop in tune with the AI consciousness.
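
The trigger mechanism can be sketched in a few lines of Perl, the language of the ghost.pl AI. The variable names, the threshold value and the thought-valences below are illustrative assumptions, not actual MindForth or ghost.pl code:

    use strict;
    use warnings;

    # Hypothetical sketch of the Volition trigger described above.
    my $activation = 0;      # dynamic activation of the motor trigger
    my $threshold  = 100;    # assumed level at which the action fires

    sub weigh_thought {
        my ($valence) = @_;                    # positive = pro, negative = con
        $activation += $valence;               # pro thoughts charge the trigger
        $activation = 0 if $activation < 0;    # con thoughts drain it, never below zero
        if ($activation >= $threshold) {
            print "Motorium: executing the voluntary action\n";
            $activation = 0;                   # feedback: reset after execution
        }
    }

    weigh_thought(60);     # "I want it" charges the trigger
    weigh_thought(-30);    # "It may be risky" drains the trigger
    weigh_thought(80);     # "It is worth it" crosses the threshold

Pro thoughts charge the trigger, con thoughts drain it, and the voluntary action executes only when the accumulated activation crosses the threshold.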

2. The self-concept of ego focuses consciousness.

Each AI Mind has a concept of self expressed by the personal pronoun "I". When a human user addresses the AI Mind as "you", the AI software interprets the word "you" as referring to its own internal self-concept of ego and activates not the concept of "you" but rather the "I" concept. When fetching from memory a thought about itself which was originally an idea communicated by an external entity, the AI software does not activate and think the original input word "you" but instead searches for and outputs "I" or "me" as the grammatically correct form of the self-concept of ego. This basic facility in understanding the difference between "I" and "you" is the rudimentary start of artificial consciousness in a machine intelligence.
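
A minimal Perl sketch of this pronoun reversal, with invented table names rather than the actual AI Mind code, might look as follows:

    use strict;
    use warnings;

    # Hypothetical sketch of the "you"/"I" reversal described above.
    my %input_to_concept = ( you => 'EGO', i => 'USER', me => 'USER' );
    my %ego_surface      = ( subject => 'I', object => 'me' );

    # Input "you" activates the internal self-concept, not a "you" concept.
    sub interpret {
        my ($word) = @_;
        return $input_to_concept{ lc $word } // uc $word;
    }

    # On output the ego concept surfaces in its grammatically correct form.
    sub express {
        my ($concept, $case) = @_;
        return $concept eq 'EGO' ? $ego_surface{$case} : lc $concept;
    }

    my $concept = interpret('you');               # the user says "you"
    print express($concept, 'subject'), "\n";     # the AI Mind says "I"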

3. An AI Mind may have a default concept of ego.

In each AI Mind the English noun-phrase module EnNounPhrase contains code which activates the default concept of self or ego when all other concepts have fallen below a threshold level of activation. This built-in default ensures that the AI Mind will always think about itself if nothing else. In a human body, so many sensory inputs are constantly reminding the human mind about its own existence that no such default concept is necessary. In a webserver AI Mind or a JavaScript tutorial AI Mind, there is no enveloping skin constantly feeding sensory inputs to the emergent consciousness, so the default mechanism serves to remind the artificial ego about itself. Once MindForth is installed in a sentient robot, the Forthmind may rely on robotic sensors instead of threshold-level software to maintain consciousness.
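
A toy Perl sketch of this default mechanism, assuming an invented activation table and threshold rather than the actual EnNounPhrase code:

    use strict;
    use warnings;

    # Hypothetical sketch of the default-ego mechanism described above;
    # the concepts and activation levels are invented for illustration.
    my %activation = ( EGO => 5, DOG => 12, ROBOT => 8 );
    my $threshold  = 10;    # assumed threshold of noteworthy activation

    sub next_topic {
        my ($best) = sort { $activation{$b} <=> $activation{$a} } keys %activation;
        # When every concept has fallen below threshold, fall back on the ego.
        return $activation{$best} >= $threshold ? $best : 'EGO';
    }

    print next_topic(), "\n";    # DOG, because an active concept wins
    $activation{DOG}   = 3;
    $activation{ROBOT} = 2;
    print next_topic(), "\n";    # EGO, the default self-concept takes over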

4. The searchlight of attention is a mechanism of consciousness.

The spreading activation module SpreadAct causes activation to spread from concept to concept in a chain of thought. As the software of each AI Mind grows in complexity and intelligence, the human user may interact with the AI Mind in ever more sophisticated ways, starting with such simple queries as "Who are you?" and "Who am I?" The conscious AI Mind thinks not only about itself or the human user in response to such queries, but also about associated ideas and related memories in a chain of thought wandering away from any topic under discussion with the human user. The ability to think about any item in an associative chain of concepts helps to generate the illusion of consciousness which is in fact consciousness. To feel conscious is to be conscious.
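
The spreading of activation along associative tags can be sketched in Perl; the concept network, the energy figure and the halving decay below are assumptions for illustration, not the actual SpreadAct module:

    use strict;
    use warnings;

    # Hypothetical sketch of spreading activation in a chain of thought.
    my %assoc = (                       # associative tags between concepts
        YOU   => [ 'I', 'ROBOT' ],
        I     => [ 'THINK', 'ROBOT' ],
        ROBOT => [ 'MIND' ],
    );
    my %activation;

    sub spread_act {
        my ($concept, $energy) = @_;
        return if $energy < 1;            # activation decays along the chain
        $activation{$concept} += $energy;
        spread_act($_, int($energy / 2)) for @{ $assoc{$concept} // [] };
    }

    spread_act('YOU', 16);    # the query "Who are you?" primes a chain of thought
    print "$_ => $activation{$_}\n" for sort keys %activation;

Activation weakens as it spreads, so the chain of thought can wander to related concepts without every concept in memory lighting up at once.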

5. Dreams are a form of limited consciousness.

An intelligent mind may have passive dreams or lucid dreams, that is, dreams in which the dreamer is aware of the dream within the dream.

If a sleeping robot activates the concept of a "dog," an associative tag carries a signal which reactivates an old image of a dog as recorded in the prior psychic life of the robot. The old image immediately moves down the visual memory channel and is recorded as a fresh memory of a dream about a dog. All images and sounds in the dream enter the sleeping consciousness in the same way: by being activated associatively and by reentering the mind as reentrant memories. Whether the waking consciousness remembers the dream or not, the sleeping consciousness has had a brief interplay of suddenly re-activated memories rearranged in novel patterns as a new experience.

For a robot to have a dream, its AI mind software must shut down the input sensorium of strong sensations coming in from the outside world or from a virtual world. In humans, the same process is known as falling asleep. Then the AI software must permit minor "brainstorms" of free associations in the mind of the sleeping robot. If a kind of self-sustaining "weather-pattern" of internal association develops, the activity reestablishes a non-waking consciousness in the mind of the sleeper.
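
The dreaming process just described (sensorium shut off, free associations reactivating old memories, reentrant images recorded as fresh memories) might be sketched in Perl as follows, with an invented memory store:

    use strict;
    use warnings;

    # Hypothetical sketch of the dream mechanism described above.
    my @memory = ( 'image of a dog', 'sound of rain', 'face of the user' );
    my $sensorium_open = 0;    # falling asleep: external input shut off

    for my $tick (1 .. 3) {
        next if $sensorium_open;                  # no dreaming while awake
        my $old = $memory[ int rand @memory ];    # a free association reactivates
        push @memory, "dream of $old";            # the reentrant image is recorded
        print "tick $tick: dreaming of $old\n";   # as a fresh memory
    }

Because each dream image is itself recorded as a new memory, a later tick may reactivate a dream of a dream, rearranging old material into novel patterns.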

Because the robot dream is happening, after all, inside a computer, all robot dreams may be monitored as images and sounds in a kind of theater of the unconscious, for analysis or even for reentry by the robot into one of its former dreams. In other words, robots may have a much more active dream life than humans do, with such robot options as re-experiencing dreams or even sharing dreams in the same dream state with other robots.

However, such co-dreaming by robots would be a human programmer's nightmare, because the associative vortex in one robot mind would have to be coordinated with an identical or at least similar associative vortex in another robot mind. It may even be necessary for a master-slave relationship to be agreed upon before two robots can share a mutual dream, so that the shared consciousness will flow freely under the associative direction of one unitary mind at a time. Of course, two robots could take turns in directing the stream of consciousness central to the experience of the dream, with a more experienced robot digging up a greater variety of old memories and transferring them as shared dream content to a perhaps younger, less experienced robot. However, there could not be too great a gap in the levels of experience between the two robots sharing a dream, or the junior robot might not have the conceptual and epistemological wherewithal to absorb the constructs being transferred from the more advanced mind to the neophyte mind.

6. Consciousness emerges from the embodiment of an AI Mind in a robot.

A robot equipped with MindForth as its brain software has the advantage of both sensory inputs stimulating consciousness and motor outputs forming part of a feedback loop enhancing consciousness. Written in Perl, the conscious ghost.pl webserver AI may not only inhabit a robotic body locally but may conceivably flit about the Web by means of MetEmPsychosis, briefly moving in and out of robot bodies anywhere.
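
A minimal, purely hypothetical Perl sketch of such a sensorimotor feedback loop, with a stub sensor standing in for real robotic hardware:

    use strict;
    use warnings;

    # Hypothetical sketch of the sense-think-act feedback loop described above.
    sub sense { return 'obstacle ahead' }                      # robotic sensor (stub)
    sub think { my ($percept) = @_; return "avoid $percept" }  # deliberation
    sub act   { my ($plan) = @_; print "Motorium: $plan\n"; return $plan }

    for my $cycle (1 .. 2) {
        my $percept = sense();            # sensory input stimulates the mind
        my $plan    = think($percept);    # thinking in the mind-modules
        my $done    = act($plan);         # motor output acts on the world
        think("I did: $done");            # and feeds back into consciousness
    }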

7. Hypnosis may be a special case of consciousness.

Hypnosis fits into the Mentifex theory of mind in the following speculative way. In a dream, a mind is ensconced within an isolated brain and has shut down the input sensorium of strong sensations from the external world. A dream is an associative vortex of memories being reactivated during sleep and coagulating briefly into a novel experience which is itself recorded as an episode in the stream of consciousness -- whether or not the waking mind still has access to the memory of the dream after the sleeping mind awakens.

Since the dreaming mind is dealing with its own ideas and not with the external sensations which are not always credible and trustworthy, the dreamer automatically believes the dream and treats the events in the dream as real -- unless the dreamer is an adept at lucid dreaming and has developed the power of knowing that a dream is in progress, without waking up. Normally, however, we accept everything told to us in a dream because we are the one telling it to us. To our dreaming selves, we are the ultimate authority.

In hypnosis, however, we manage to fall asleep (or into a trance) without shutting down the pathway of the input sensorium of strong external sensations. We experience a dream-like trance and we surrender our sense of discretion, giving our trust to what we take to be our own sacred and trustworthy consciousness but what is, on the contrary, another mind alien to our own: the mesmerizing hypnotist. If the hypnotist tells us that an umbrella or a loaf of bread is a kitten, we conjure up from memory all the attributes of a kitten and we perceive the kitten, because a seemingly trustworthy and ultimate authority has told us to. Since the dream-like trance is normally made up of memories being re-activated anyway, we let the verbal suggestions of the hypnotist override the sensations from an external world that we normally exclude from our dreams during sleep. Our dreams and trances are not centered around external events but around the associative vortex which is assembling old memories into a novel experience.

The hypnotist has sneaked, so to speak, into our own center of control of our semi-conscious processes and has begun to give orders as if we ourselves gave them. Perhaps in the trance posing as a dream we think that we are deciding to quit smoking after years of inhaling cigarette smoke, or perhaps we have decided to remember some obscure detail of forensic information needed in a courtroom. For whatever reason, the hypnotist has achieved the proverbial Vulcan mind-meld with our non-waking consciousness and has become a homunculus within it. Logic dictates that we are in a dream, where we normally believe everything that occurs, and that therefore we will believe whatever is suggested to us by the hypnotist.

https://ai.neocities.org/Consciousness.html -- Mentifex Theory of Consciousness for AI