
The Society Of Mind

In his book of the same name, Minsky constructs a model of human intelligence step by step, built up from the interactions of simple parts called agents, which are themselves mindless. He describes the postulated interactions as constituting a "society of mind", hence the title.[2]

The work, which first appeared in 1986, was the first comprehensive description of Minsky's "society of mind" theory, which he began developing in the early 1970s. It is composed of 270 self-contained essays, divided into 30 general chapters. The book was later also adapted into a CD-ROM version.

In the process of explaining the society of mind, Minsky introduces a wide range of ideas and concepts. He develops theories about how processes such as language, memory, and learning work, and also covers concepts such as consciousness, the sense of self, and free will; because of this, many view The Society of Mind as a work of philosophy.

The book was not written to prove anything specific about AI or cognitive science, and does not reference physical brain structures. Instead, it is a collection of ideas about how the mind and thinking work on the conceptual level.

A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

What is the human mind and how does it work? This is the question that Marvin Minsky asks in The Society of Mind [2]. He explores a staggering range of issues, from the composition of the simplest mental processes to proposals for the largest-scale architectural organization of the mind, ultimately touching on virtually every important question one might ask about human cognition. How do we recognize objects and scenes? How do we use words and language? How do we achieve goals? How do we learn new concepts and skills? How do we understand things? What are feelings and emotions? How does 'commonsense' work?

In seeking answers to these questions, Minsky does not search for a 'basic principle' from which all cognitive phenomena somehow emerge, for example, some universal method of inference, all-purpose representation, or unifying mathematical theory. Instead, to explain the many things minds do, Minsky presents the reader with a theory that dignifies the notion that the mind consists of a great diversity of mechanisms: every mind is really a 'Society of Mind', a tremendously rich and multifaceted society of structures and processes, in every individual the unique product of eons of genetic evolution, millennia of human cultural evolution, and years of personal experience.

Minsky introduces the term agent to refer to the simplest individuals that populate such societies of mind. Each agent is on the scale of a typical component of a computer program, like a simple subroutine or data structure, and as with the components of computer programs, agents can be connected and composed into larger systems called societies of agents. Together, societies of agents can perform functions more complex than any single agent could, and ultimately produce the many abilities we attribute to minds.
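To make the flavor of this concrete, here is a minimal sketch (the names, classes, and toy world are my own illustration, not anything from the book) of mindless agents composed into a society whose combined behavior exceeds what any single member does:

```python
# Hypothetical sketch of Minsky-style agents composed into a society.
# All names and structure here are illustrative, not from the book.

class Agent:
    """A simple, mindless process that performs one small function."""
    def __init__(self, name, action):
        self.name = name
        self.action = action   # a plain function, like a small subroutine

    def run(self, world):
        return self.action(world)

class Society:
    """A collection of agents that together do more than any one alone."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, world):
        for agent in self.agents:
            world = agent.run(world)
        return world

# A toy society: each agent alone is trivial, but composed they
# 'see' a block, 'grasp' it, and 'place' it.
see   = Agent("See",   lambda w: {**w, "seen": True})
grasp = Agent("Grasp", lambda w: {**w, "held": w.get("seen", False)})
place = Agent("Place", lambda w: {**w, "placed": w.get("held", False)})

builder = Society([see, grasp, place])
```

Running `builder.run({})` steps the toy world through all three agents; the point is only that the interesting behavior lives in the composition, not in any single agent.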

Both my collaborator, Seymour Papert, and I had long desired to combine a mechanical hand, a television eye, and a computer into a robot that could build with children's building-blocks. It took several years for us and our students to develop Move, See, Grasp, and hundreds of other little programs we needed to make a working Builder-agency... It was this body of experience, more than anything we'd learned about psychology, that led us to many ideas about societies of mind. [2, Section 2.5]

In trying to make that robot see, we found that no single method ever worked well by itself. For example, the robot could rarely discern an object's shape by using vision alone; it also had to exploit other types of knowledge about which kinds of objects were likely to be seen. This experience impressed on us the idea that only a society of different types of processes could possibly suffice. [2, Postscript and Acknowledgement]

In the middle 1970s Papert and I tried together to write a book about societies of mind but abandoned the attempt when it became clear that the ideas were not mature enough. The results of that collaboration shaped many earlier sections of this book. [2, Postscript and Acknowledgement]

Some workers in Artificial Intelligence may be disconcerted by the "high level" of discussion in this paper, and cry out for more lower-level details. [...] There are many real questions about overall organization of the mind that are not just problems of implementation detail. The detail of an AI theory (or one from Psychology or from Linguistics) will miss the point, if machines that use it can't be made to think. Particularly in regard to ideas about the brain, there is at present a poverty of sophisticated conceptions, and the theory below is offered to encourage others to think about the problem. [8]

Minsky sees the mind as a vast diversity of cognitive processes each specialized to perform some type of function, such as expecting, predicting, repairing, remembering, revising, debugging, acting, comparing, generalizing, exemplifying, analogizing, simplifying, and many other such 'ways of thinking'. There is nothing especially common or uniform about these functions; each agent can be based on a different type of process with its own distinct kinds of purposes, languages for describing things, ways of representing knowledge, methods for producing inferences, and so forth.

In the Society of Mind, mental activity ultimately reduces to turning individual agents on and off. At any time, only some agents in a society of mind are active, and their combined activity constitutes the 'total state' of the mind. However, there may be many different activities that are going on at the same time in different agencies, and Minsky introduces the term 'partial state of mind' to describe the activities of subsets of the agents of the mind.
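As a rough illustration (the agency names and agents below are invented), the total state can be pictured as the set of currently active agents, and a partial state as that activity restricted to one agency:

```python
# Illustrative sketch only: 'total state' as the set of active agents,
# 'partial state' as the activity within a single agency.

total_state = {"red", "round", "sweet", "reachable", "hungry"}  # active agents

agencies = {
    "color": {"red", "green", "yellow"},
    "shape": {"round", "square"},
    "taste": {"sweet", "sour"},
}

def partial_state(agency_name):
    """The active agents belonging to one agency."""
    return total_state & agencies[agency_name]
```

Many such partial states can hold at once ("red" in color, "round" in shape) without any single agency seeing the whole.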

K-lines. K-lines are the most common agent in the Society of Mind theory. The purpose of a K-line is simply to turn on a particular set of agents, and because agents have many interconnections, activating a K-line can cause a cascade of effects within a mind. Many K-lines are formed by 'chunking' the net effects of a problem solving episode, so that the next time the system faces a similar problem, it not only has the previous solution as a starting point, but also the experience of deriving that solution, which includes memories of false starts, unexpected discoveries, and other lessons from the previous experience that aren't captured by the final solution alone. Thus K-lines cause a Society of Mind to enter a particular remembered configuration of agents, one that formed a useful society in the past. K-lines are a simple but powerful mechanism for disposing a mind towards engaging relevant kinds of problem solving strategies, forms of knowledge, types of goals, memories of particular experiences, and the other mental resources that might help a system solve a problem. [Footnote 2]
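Stripped of the cascading interconnections, a K-line can be sketched as nothing more than a remembered set of agents that it reactivates; the agent names here are hypothetical:

```python
# Toy K-line sketch: a K-line 'chunks' the set of agents that were active
# during a successful problem-solving episode, and firing it later restores
# that remembered configuration. Names are illustrative only.

class KLine:
    def __init__(self, remembered_agents):
        self.remembered = set(remembered_agents)

    def activate(self, active_agents):
        """Return the new set of active agents after firing this K-line."""
        return active_agents | self.remembered

# Chunk the agents that were on when a block-stacking problem was solved:
kline = KLine({"measure", "compare-lengths", "stack-blocks"})

# Later, facing a similar problem, firing the K-line re-enters the
# remembered configuration on top of whatever is already active:
now_active = kline.activate({"see", "grasp"})
```

The real theory adds much more (cascades, level-bands, partial activation), but the core operation is just this set union.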

Polynemes. Polynemes invoke partial states within multiple agencies, where each agency is concerned with representing some different aspect of a thing. For example, recognizing an apple arouses an 'apple-polyneme' that invokes certain properties within the color, shape, taste, and other agencies to mentally manufacture the experience of an apple, as well as brings to mind other less sensory aspects such as the cost of an apple, places where apples can be found, the kind of situations in which one might eat an apple, and so forth. Polynemes support the idea that a thing's 'meaning' is best expressed not in terms of any single representation, but rather in a distributed way across multiple representations.
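One might caricature a polyneme as a single signal that puts a different partial state into each of several agencies; the agencies and properties below are invented for illustration:

```python
# Illustrative polyneme sketch: one signal, many agency-specific effects.
# Agency names and properties are my own invention.

apple_polyneme = {
    "color": "red",
    "shape": "round",
    "taste": "sweet",
}

def arouse(polyneme, agencies):
    """Set each agency's state to the property the polyneme names there."""
    for agency, prop in polyneme.items():
        agencies[agency] = prop
    return agencies

# Recognizing an apple 'mentally manufactures' the experience by setting
# a partial state in every agency the polyneme touches:
experience = arouse(apple_polyneme, {"color": None, "shape": None, "taste": None})
```

The point of the sketch is the distributed meaning: no single agency holds "apple"; the concept exists only as this pattern across agencies.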

Difference-Engines. What does it mean to 'solve' a problem? Solving a problem can be regarded as reducing or eliminating the important differences between the current state and some desired goal state. Minsky proposes a simple machine called a difference-engine that embodies this problem solving strategy. Difference-engines operate by recognizing differences between the current state and the desired state, and acting to reduce each difference by invoking K-lines that turn on suitable solution methods. The difference-engine idea is based on Newell and Simon's early 'GPS' problem solver [11]. Minsky elevates the GPS idea to a central principle, and one might interpret Minsky as suggesting that we view the mind as a society of such difference-reducing machines that populate the mind at every level. But ultimately, there is no single mechanism for building difference-engines, because there is no single way to compare different representations.
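The GPS-style loop can be sketched directly: find a difference between the current and goal states, invoke a method suited to reducing it, and repeat. The state representation and method table below are invented for illustration:

```python
# Sketch of a GPS-style difference-engine (means-ends analysis).
# States, differences, and methods are illustrative only; in the theory,
# the 'methods' would be K-lines turning on suitable agents.

def difference_engine(current, goal, methods):
    """Repeatedly reduce some current-vs-goal difference until none remain."""
    current = dict(current)
    while current != goal:
        # Recognize an aspect in which the current state differs from the goal...
        diff = next(k for k in goal if current.get(k) != goal[k])
        # ...and invoke the method associated with reducing that difference.
        current = methods[diff](current)
    return current

methods = {
    "position": lambda s: {**s, "position": "on-table"},  # e.g. a Move agency
    "grasped":  lambda s: {**s, "grasped": True},         # e.g. a Grasp agency
}

result = difference_engine(
    {"position": "in-box", "grasped": False},
    {"position": "on-table", "grasped": True},
    methods,
)
```

Note how the machine itself is trivial; all the intelligence hides in how differences are recognized and which methods reduce them, which is exactly why there is no single way to build one.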

A-brains and B-brains. Some types of unproductive mental activity are not specific to any particular method, such as 'looping' or 'meandering', which might occur in any problem solving method that engages in search. Minsky introduces the notion of the 'B-brain' whose job is not so much to think about the outside world, but rather to think about the world inside the mind (the 'A-brain'), so as to be able to notice these kinds of errors and correct them. This division of the mind into 'levels of reflection' is an idea that has become even more central in Minsky's more recent theories. [Footnote 4]
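A minimal sketch of this division (the looping 'A-brain' and its states are deliberately contrived) is a watcher that monitors the A-brain's activity rather than the outside world, and interrupts when it notices repetition:

```python
# Illustrative A-brain/B-brain sketch: the B-brain observes the A-brain's
# states, not the world, and catches method-independent errors like looping.
# The looping solver below is invented for illustration.

def a_brain_step(state):
    """A deliberately unproductive solver that cycles between two states."""
    return {"search": "try-B"} if state == {"search": "try-A"} else {"search": "try-A"}

def b_brain_run(initial, max_steps=10):
    """Run the A-brain while watching for repeated states (a loop)."""
    seen = set()
    state = initial
    for _ in range(max_steps):
        key = tuple(sorted(state.items()))
        if key in seen:
            return "loop detected: interrupting A-brain"
        seen.add(key)
        state = a_brain_step(state)
    return "no loop"

verdict = b_brain_run({"search": "try-A"})
```

The B-brain needs no understanding of what the A-brain is trying to do; detecting "you are repeating yourself" works for any search method, which is what makes the reflective level worth separating out.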

How do societies of mind 'grow up'? Minsky suggests that mental societies are constructed over time, and that the trajectory of this process differs from person to person. He offers several potential mechanisms for growth.

