In his book of the same name, Minsky constructs a model of human intelligence step by step, built up from the interactions of simple parts called agents, which are themselves mindless. He describes the postulated interactions as constituting a "society of mind", hence the title.[2]
The work, which first appeared in 1986, was the first comprehensive description of Minsky's "society of mind" theory, which he began developing in the early 1970s. It is composed of 270 self-contained essays, divided into 30 general chapters. The book was also adapted into a CD-ROM version.
In the process of explaining the society of mind, Minsky introduces a wide range of ideas and concepts. He develops theories about how processes such as language, memory, and learning work, and also covers concepts such as consciousness, the sense of self, and free will; because of this, many view The Society of Mind as a work of philosophy.
The book was not written to prove anything specific about AI or cognitive science, and does not reference physical brain structures. Instead, it is a collection of ideas about how the mind and thinking work on the conceptual level.
Minsky first started developing the theory with Seymour Papert in the early 1970s. Minsky said that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks.[3]
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.
This is a preprint of an article published in Computers and Artificial Intelligence, vol. 22:6, pp. 521-543, 2003.

1. Introduction

The functions performed by the brain are the products of the work of thousands of different, specialized sub-systems, the intricate product of hundreds of millions of years of biological evolution. We cannot hope to understand such an organization by emulating the techniques of those particle physicists who search for the simplest possible unifying conceptions. Constructing a mind is simply a different kind of problem, of how to synthesize organizational systems that can support a large enough diversity of different schemes, yet enable them to work together to exploit one another's abilities. [1]
What is the human mind and how does it work? This is the question that Marvin Minsky asks in The Society of Mind [2]. He explores a staggering range of issues, from the composition of the simplest mental processes to proposals for the largest-scale architectural organization of the mind, ultimately touching on virtually every important question one might ask about human cognition. How do we recognize objects and scenes? How do we use words and language? How do we achieve goals? How do we learn new concepts and skills? How do we understand things? What are feelings and emotions? How does 'commonsense' work?
In seeking answers to these questions, Minsky does not search for a 'basic principle' from which all cognitive phenomena somehow emerge, for example, some universal method of inference, all-purpose representation, or unifying mathematical theory. Instead, to explain the many things minds do, Minsky presents the reader with a theory that dignifies the notion that the mind consists of a great diversity of mechanisms: every mind is really a 'Society of Mind', a tremendously rich and multifaceted society of structures and processes, in every individual the unique product of eons of genetic evolution, millennia of human cultural evolution, and years of personal experience.
Minsky introduces the term agent to refer to the simplest individuals that populate such societies of mind. Each agent is on the scale of a typical component of a computer program, like a simple subroutine or data structure, and as with the components of computer programs, agents can be connected and composed into larger systems called societies of agents. Together, societies of agents can perform functions more complex than any single agent could, and ultimately produce the many abilities we attribute to minds.
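To make this flavor concrete, here is a minimal Python sketch of agents as tiny program components composed into an agency; the Agent and Agency classes and the See/Grasp/Move example are purely illustrative assumptions, not code from the book or the article.

# A minimal sketch (an illustration only, not code from the book or the
# article) of agents as tiny program components composed into a larger
# "society", or agency, whose combined behavior exceeds any single member's.

class Agent:
    """A mindless specialist: does one small thing when asked."""
    def __init__(self, name, action):
        self.name = name
        self.action = action              # a plain function, like a subroutine

    def run(self, world):
        return self.action(world)

class Agency(Agent):
    """A society of agents that together do more than any member could alone."""
    def __init__(self, name, members):
        super().__init__(name, action=None)
        self.members = members            # agencies may contain agents or other agencies

    def run(self, world):
        for member in self.members:       # delegate the work to the specialists
            world = member.run(world)
        return world

# Hypothetical toy "Builder" agency made of See, Grasp, and Move agents.
see   = Agent("See",   lambda w: {**w, "target": w["blocks"][0]})
grasp = Agent("Grasp", lambda w: {**w, "holding": w["target"]})
move  = Agent("Move",  lambda w: {**w, "tower": w["tower"] + [w["holding"]]})

builder = Agency("Builder", [see, grasp, move])
print(builder.run({"blocks": ["red block"], "tower": []}))

The point of the composition is not the particular classes, but that each member could equally be implemented by an entirely different kind of process, so long as the agency can pass work among them.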
Minsky's vision of the mind as a society gives the reader a familiar yet powerful metaphor for organizing the great complexity of the human mind, for a society is almost by definition not a uniform or unified system, but is instead composed of a great many different types of individuals, each with a different background and a different role to play. Yet we must be careful: the societies of The Society of Mind should not be regarded as very much like human communities, for individual humans are 'general purpose', while individual agents are quite specialized. So while the concept of a society is a familiar notion, this metaphor is only a starting point, and the theory raises a host of questions about how societies of mind might actually be organized.
This article examines the Society of Mind theory. We first give the reader a brief sense of the history of the development of the theory. From where did these ideas originate? Then, we will return to the questions that the theory raises, and describe some of the mechanisms that it proposes. What are agents? How do they work? How do they communicate? How do they grow? Finally, we consider related developments in Artificial Intelligence since the publication of The Society of Mind.
The Society of Mind theory was born in discussions between Marvin Minsky and Seymour Papert in the early 1970s at the MIT Artificial Intelligence Lab. One of the world's leading AI labs, its explorations encompassed diverse strands of research including machine learning, knowledge representation, robotic manipulation, natural language processing, computer vision, and commonsense reasoning. As a result, the true complexity of cognitive processes was perhaps clearer to this small community than to any other at the time.
The severity of this issue may have been confronted for the first time in the famous 'copy-demo' project. Toward the end of the 1960s, Minsky and Papert and their students built one of the first autonomous hand-eye robots. Its task was to use a robotic hand to build copies of children's building-block structures that it saw through cameras. Minsky recalls:
Both my collaborator, Seymour Papert, and I had long desired to combine a mechanical hand, a television eye, and a computer into a robot that could build with children's building-blocks. It took several years for us and our students to develop Move, See, Grasp, and hundreds of other little programs we needed to make a working Builder-agency... It was this body of experience, more than anything we'd learned about psychology, that led us to many ideas about societies of mind. [2, Section 2.5]
In trying to make that robot see, we found that no single method ever worked well by itself. For example, the robot could rarely discern an object's shape by using vision alone; it also had to exploit other types of knowledge about which kinds of objects were likely to be seen. This experience impressed on us the idea that only a society of different types of processes could possibly suffice. [2, Postscript and Acknowledgement]
Ultimately, these kinds of experiences led Minsky and Papert to become powerful advocates of the view that intelligence was not the product of any simple recipe or algorithm for thinking, but rather resulted from the combined activity of great societies of more specialized cognitive processes. However, there were few ideas at the time for how to understand and build systems that engaged in thousands of heterogeneous cognitive computations. The conventional view within AI for how problem-solving systems should be built could well be summarized by this statement by Allen Newell from 1962:
But in Minsky and Papert's experience, this 'explorer' was being overwhelmed by the sheer magnitude of tasks and subtasks encountered in ordinary commonsense activities such as seeing, grasping, or talking. The emergence of this unanticipated procedural complexity demanded a theory for how such systems could be built.
Some of Minsky's early thoughts about how to approach this problem appear in his famous paper A Framework for Representing Knowledge [4], in which he considers a variety of ideas for how to organize the collections of procedural and declarative knowledge needed to solve many commonsense problems, such as recognizing visual scenes and understanding natural language. He summarizes the motivation for frames as follows:
The 'chunks' of reasoning, language, memory, and 'perception' ought to be larger and more structured, and their factual and procedural contents must be more intimately connected in order to explain the apparent power and speed of mental activities.
The following essay was written by Scott Fahlman (in 1974 or 1973?), when a student at MIT. It is still one of the clearest images of how societies of mind might work. I have changed only a few terms. Fahlman, now a professor at Carnegie-Mellon University, envisioned a frame as a packet of related facts and agencies, which can include other frames. Any number of frames can be aroused at once, whereupon all their items, and all the items in their sub-frames as well, become available unless specifically canceled. The essay is about deciding when to allow such fragments of information to become active enough to initiate yet other processes.
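The arousal-and-cancellation behavior described here can be pictured with a small Python sketch; the Frame class, its fields, and the birthday-party example are assumptions introduced purely for illustration, not Fahlman's or Minsky's actual mechanism.

# A rough sketch of a frame as a packet of facts that can include other
# frames.  Arousing a frame makes its items, and its sub-frames' items,
# available, unless the frame specifically cancels an inherited item.

class Frame:
    def __init__(self, name, items=(), subframes=(), cancels=()):
        self.name = name
        self.items = set(items)            # facts this frame contributes
        self.subframes = list(subframes)   # frames included within this one
        self.cancels = set(cancels)        # inherited items this frame suppresses

    def arouse(self):
        """Collect every item made available by arousing this frame."""
        available = set(self.items)
        for sub in self.subframes:
            available |= sub.arouse()      # sub-frame items become available too
        return available - self.cancels    # ...unless specifically canceled

# Hypothetical example: a 'birthday party' frame that includes a generic
# 'party' frame but cancels one of its default expectations.
party    = Frame("party", items={"guests", "food", "music"})
birthday = Frame("birthday-party",
                 items={"cake", "presents"},
                 subframes=[party],
                 cancels={"music"})        # this particular party has no music

print(birthday.arouse())                   # cake, presents, guests, food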
Systems of frames with attached procedures were thus the ancestor of the concept of an agent. So while the Society of Mind theory had not yet been given a name in 1974, the roots of the theory were clearly present in the work that was going on at the time. Related notions in the programming language community, such as the development of Smalltalk and other object-oriented programming languages, were inspiring people to discover the advantages of new, more 'cellular' ways to think about organizing programs [5].