McCulloch, Warren S. (artificial neural networks)
 

Warren S. McCulloch (1898-1968) was an American psychiatrist and neurophysiologist who co-founded Cybernetics. He summed up his life's work by saying that he had always sought to answer the question: "What is a number that a man may know it, and a man, that he may know a number?" (McCulloch 1961). His greatest contributions to computer science include an early existence proof that networks of idealized neurons can act as universal computers (Turing machines), and a model of artificial neural networks that both explains how a biological brain could perform logical calculations and serves as a basis of Artificial Intelligence (AI).

McCulloch studied theology, philosophy, psychology and mathematical physics at Haverford College and Yale University and, after serving as a Naval officer during the First World War, received his master's degree in psychology from Columbia University for a thesis on experimental aesthetics. He soon became disappointed by the division of psychology into behavioristic, psychoanalytic, and introspective camps and the antagonism among them, and decided that all three could be replaced by a complete neurophysiological theory of the human mind.

McCulloch had been much impressed by Russell and Whitehead's Principia Mathematica and, thinking that a proper understanding of psychology would have to be made formally rigorous by treating mental concepts as logical propositions, felt that the only place to begin such a formal theory of the mind was empirical inquiry into the nervous system. He continued his studies and received an M.D., with specialties in psychiatry and the physiology of the brain, from Columbia University in 1927.

He then turned to the study of the neurological basis of mental disorders as a physician at Bellevue Hospital from 1928 to 1930, and at Rockland State Hospital for the Insane from 1930 to 1932. In order to study the structural properties of nervous activity, McCulloch moved to the Yale Medical School. There he began research under Dusser de Barenne to map neuronal projections of the sensory and motor cortex areas of live animal brains by observing the electrical activity stimulated by localized injections of strychnine. Finally, in 1940, he moved to Chicago to accept a position as professor of psychiatry and clinical professor of physiology and to direct the new Research Laboratory at University of Illinois Medical School's Neuropsychiatric Institute.

There, McCulloch met two students, Jerome Lettvin and Walter Pitts, who would become his closest colleagues, collaborators and friends. Lettvin was a young medical student interested in the use of mathematics in biology and the electrical properties of the brain. Pitts was a polymath who had never graduated from high school nor enrolled in college, but began studying logic with the Vienna Circle philosopher Rudolf Carnap at the University of Chicago after running away from home at the age of 14. In 1941, the 18-year-old Pitts began working with McCulloch on a theory of the mind which would show how neurons in the brain could represent logical propositions.

In their 1943 paper, McCulloch and Pitts demonstrated that a suitably configured network of mathematically idealized neurons could represent any well-formed logical proposition and compute any function representable in their logical calculus. Moreover, any such network could simulate a "memory" if its outputs were fed back into its inputs. Thus, their neuron nets were a kind of universal computer (Turing 1937, see On Computable Numbers (Turing)).

While mathematically idealized, the logical neurons they devised simulated what was then known of the electrical behavior of biological neurons, including synaptic transmission, axonal conduction, ionic thresholds, nervous impulses, excitation and inhibition. They took advantage of the fact that neurons exhibited an "all-or-none" property of firing or not firing to set up a crucial analogy to the binary "true-or-false" property of propositions in Boolean logic. This mathematical model of artificial neural networks thus provided a compelling basis for a theory of how the brain, by being formally equivalent to a computer at the level of synaptic transmission, was capable of performing sophisticated logical reasoning.
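The all-or-none analogy can be made concrete with a minimal sketch in modern code (an illustration, not McCulloch and Pitts' original notation; the function names are invented here): a logical neuron fires only if the sum of its excitatory inputs reaches a threshold and no inhibitory input is active, and familiar Boolean gates fall out as special cases.

```python
# Sketch of a McCulloch-Pitts "logical neuron": output is all-or-none (0 or 1).
# Inhibition here is absolute: any active inhibitory input vetoes firing.

def mp_neuron(excitatory, inhibitory, threshold):
    if any(inhibitory):                      # absolute inhibition vetoes firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Boolean gates as configurations of a single logical neuron:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([1], [a], threshold=1)
```

Since any well-formed proposition can be built from such gates, suitably wired networks of these units can compute any function in the McCulloch-Pitts logical calculus.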

This paper became one of the foundations of the new field of Cybernetics, and McCulloch became one of its principal leaders. As chairman, McCulloch presided over the famous Macy conferences on "Circular Causal and Feedback Mechanisms in Biological and Social Systems." In 1952, McCulloch moved to the Research Laboratory of Electronics at the Massachusetts Institute of Technology to join the other prominent leader of the movement, Norbert Wiener, and to set up a research group to study the circuit theory of the brain. Unfortunately, the two men had a falling out by the end of that year, incited by rumors and egos while Wiener was away on sabbatical in Mexico. Despite this, McCulloch remained at MIT for the rest of his career, and during that time many noteworthy students came to his lab to study the mathematical properties of natural and artificial neurons, including the mathematician who later developed fractals, Benoit Mandelbrot, and the computer scientists Manuel Blum, Stuart Kauffman, Marvin Minsky and Seymour Papert.

Even though his work formed the basis of the field of AI, and its sub-field of artificial neural networks, McCulloch maintained an ambivalent relationship with these areas of research. Initially, McCulloch saw the mechanical simulations of AI as a means for experimenting with theories of how the brain worked. By the time computer programs were being developed in the 1950s that could play checkers and do other forms of logical problem-solving, McCulloch had begun to reject many of these projects as efforts aimed at developing toys and monsters rather than a greater understanding of the mind. He believed that much of this research merely assumed that the mind was a computer rather than showing how the brain performed particular calculations, and sought to demonstrate the various mental-like tasks that computers could perform rather than attempting to propose or test any empirical theories of the mind or brain.

Much of McCulloch's dissatisfaction with AI may have been a reaction to John von Neumann's 1951 paper, "The General and Logical Theory of Automata." The paper directly addressed McCulloch and Pitts' 1943 paper, issuing a serious challenge to it as a theory of mind that McCulloch never felt had been fully answered. The paper further outlined a mathematical theory of automata, of which logical automata like the artificial neuron were only a limited kind.

The problem von Neumann articulated was that the logical network could only represent concepts which were completely and precisely specified, while in fact most of our ideas and knowledge are not specifiable in this way. Worse, if they were so specified, it might require more bits of information than there are atoms in the universe to represent them all. Thus, merely showing that some limited domain of mental performance could be completely and precisely specified, as most AI projects sought to do, did not really answer this fundamental challenge. McCulloch's preoccupation with this problem led him to investigations of multi-valued, higher-order and probabilistic logics.

In 1947, Pitts and McCulloch wrote another influential paper, "On How We Know Universals: The Perception of Auditory and Visual Forms." This paper outlined a theory of how an artificial neural network could perform a kind of abstraction or statistical induction. That is, it could obtain a representation of a universal concept like "apple" by seeing many instances of particular apples. This idea laid the groundwork for the use of neural networks as models for learning generalized rules from specific instances, and as models of sensory perception in pattern recognition and classification tasks. Through the 1950s, McCulloch still held out hope that a better understanding of the behavior of networks of artificial neurons would be able to elucidate the inner workings of the brain in experiments such as Lettvin et al. (1959).

Research on neural networks then exploded during the 1960s, after the psychologist Frank Rosenblatt introduced his Perceptron model in 1958. Whereas the research during the 1940s and '50s had been primarily mathematical, with just a few analog neural circuits being built, the Perceptron was a learning rule for a statistical simulation of neural computation run on a mainframe computer. It became the basis of the modern neural network. As digital computers became more readily available in the 1960s, research in machine learning turned away from logic circuits and towards building statistical simulations in digital computers. These networks essentially solve the class-membership problem: "Which class of things is this object a member of?" Many kinds of problems can be reduced to this one, and many neural network simulations were constructed during this period to solve all sorts of pattern recognition problems involving the classification of handwritten characters, speech, and visual forms.

Rather than treating the inputs to a network as logical propositions, the basic approach of the Perceptron model is to encode an observed example as a pattern of activation in the input layer of a network. This activation then propagates to an output layer over a set of weighted connections between the layers. The classification of the example can then be read off the output layer as another pattern of activation. To train a network, one simply adjusts the weights according to a learning algorithm (increasing weights that contribute to correct answers, decreasing weights that produce wrong answers) until the input examples cause the correct output patterns to be produced. Once trained, such a network can automatically produce a classification for any example, though its correctness depends heavily on the statistics of the examples it was trained on. The Perceptron thus became the paradigm for a vast amount of research done during the 1960s.
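The learning rule just described can be sketched in a few lines of modern code (an illustration, not Rosenblatt's implementation; the function names and parameters are invented here): on each wrong answer, weights are nudged toward the correct output.

```python
# Sketch of a single-layer Perceptron with the error-correction learning rule.

def predict(weights, bias, x):
    # Weighted sum of inputs, thresholded to an all-or-none output.
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else 0

def train(examples, n_inputs, epochs=20, lr=1.0):
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, target in examples:
            error = target - predict(weights, bias, x)
            if error:  # adjust weights only when the answer is wrong
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# Learning the (linearly separable) OR function from examples:
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(examples, n_inputs=2)
```

After training, `predict(w, b, x)` classifies each of the four input patterns correctly; the learned weights encode the separating line.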

This research abruptly halted when the U.S. military stopped funding these projects, and resources were shifted towards research into the methods of symbolic logic and the development of expert systems during the 1970s. A commonly cited cause of this shift was the publication of a lengthy criticism of the Perceptron model by two long-time neural network researchers and students of McCulloch, Minsky and Papert (1969). The irony of this is compounded by the fact that the two had established the AI Laboratory at MIT in 1968, which continued to receive abundant military funding. Their criticism amounted to a mathematical observation that a single-layer network of Perceptrons can learn only linearly separable functions: classifications in which a single line divides the plane (or, in higher dimensions, a single hyperplane divides the space) into regions, where each region represents a class and each observed example is a point. For his own part, McCulloch was impressed by the work which led up to this book, and hoped that it would discourage many of the "charlatans" who had taken up research in neural networks. Ultimately he believed that neural networks could only explain some of the phenomena of sensory perception, but not the whole of the mind.
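The limitation Minsky and Papert formalized can be seen concretely in the simplest case: no single threshold unit computes the exclusive-or (XOR) function, because no one line separates its true points from its false points. A brute-force search over a grid of candidate weights (an illustrative check over one finite grid, not their proof) comes up empty:

```python
# A single threshold unit: fires if the weighted sum of inputs meets 0.
def fires(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

# Exclusive-or: true exactly when the two inputs differ.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

grid = [i / 2 for i in range(-8, 9)]     # candidate values -4.0 .. 4.0
solutions = [
    (w1, w2, b)
    for w1 in grid for w2 in grid for b in grid
    if all(fires(w1, w2, b, *x) == t for x, t in XOR.items())
]
# 'solutions' is empty: no line through this grid separates XOR's classes
# (and, as Minsky and Papert showed, none exists at all).
```

Networks with a hidden layer do compute XOR, but no learning rule for training such multi-layer networks was widely known at the time.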

In the mid-1980s, there was a renewed interest in neural networks (Rumelhart et al. 1986). This was made possible by a new technique, error backpropagation, which made training multi-layer networks feasible. Given enough layers and neurons in a network, and enough training examples and time, even the most complex classification structures could be learned. A great debate ensued between the two camps, the logic-based "Traditional AI" approach and the connectionist "Parallel Distributed Processing" approach, at a time when the government agencies funding AI research were becoming increasingly skeptical of the entire field. The two camps eventually recognized that each had a place, and agreed to peacefully co-exist. Since then, numerous hybrid systems have attempted to combine the techniques of the two approaches.

After becoming disenchanted with these two fields of research which he had inspired, McCulloch spent the remainder of his career searching for a neurological theory of a much more obscure faculty of the mind: consciousness. He was dissatisfied with notions of consciousness which made it into the central authority of a hierarchically organized mind. His efforts involved detailed studies of the reticular formation of the brain stem, the part of the brain responsible for the most basic vital functions like breathing and metabolic control. He believed that consciousness was related to the ability of an organism to switch rapidly and completely between a small number of basic modes of interacting with the world, such as: eat, drink, sleep, fight, flee, hunt, search, defecate, urinate, mate, groom and nest (McCulloch 1969). He used as his operative metaphor the peculiar decentralized control of naval fleets, in which any ship in the fleet can come to be in command of the whole fleet if it comes into contact with the enemy before the others, or is in the best position to take command. He theorized that one specialized sub-structure of the brain could wrest control from the others when its functions became imperative to the survival of the organism, as when swallowing automatically shuts off breathing to prevent choking. Thus, knowledge and necessity become the basis of the shifting authority of mental command.
 
 
 

Biography

Warren Sturgis McCulloch. Born 16 November 1898, Orange, New Jersey, USA. Studied at Haverford College, Pennsylvania and Yale University, 1917-21. Served as an officer in U.S. Naval Reserves, 1919-21. Received an M.A. in Psychology, Columbia University, 1923. Received an M.D. from Columbia Medical School, 1927. Was a practicing physician at Bellevue Hospital, 1928, and at Rockland State Hospital for the Insane, 1930. Studied the structure of the cortex with Dusser de Barenne at Yale Medical School, 1934. Became director of the Research Laboratory at University of Illinois Medical School's Neuropsychiatric Institute, 1940. Wrote "A Logical Calculus of the Ideas Immanent in Nervous Activity" with Walter Pitts, establishing the "McCulloch-Pitts Model" of the neuron, 1943. Chairman of the Macy Conferences on Circular Causal and Feedback Mechanisms in Biological and Social Systems, 1946-1953. Wrote "On How We Know Universals: The Perception of Auditory and Visual Forms" with Walter Pitts, establishing inductive generalization in neural networks, 1947. Moved to the Research Laboratory of Electronics at the Massachusetts Institute of Technology to study the circuitry of the brain, 1952. Continued teaching, researching and publishing on the mind, brain and cybernetics at MIT and from his farm in Old Lyme, Connecticut until his death, 1968.
 
 
 

by Peter M. Asaro

 
 
 

For Further Research

Anderson, James, and Edward Rosenfeld, Editors. Talking Nets: An Oral History of Neural Networks. Cambridge, MA: MIT Press, 1998.

Lettvin, Jerome Y., Humberto R. Maturana, Warren S. McCulloch and Walter Pitts. "What the Frog's Eye Tells the Frog's Brain." Proceedings of the IRE, 47(11) (November 1959). Reprinted in The Collected Works of Warren S. McCulloch, vol. 4, Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989: 1161-1172.

Lindgren, N. "The Birth of Cybernetics-An End to the Old World: The Heritage of Warren S. McCulloch." Innovation, 6:12-15, 1969.

Moreno-Diaz, R., and J. Mira-Mira , editors. Brain Processes, Theories and Models: An International Conference in Honor of W. S. McCulloch 25 Years After His Death. Cambridge, MA: MIT Press, 1996.

McCulloch, Warren S. Embodiments of Mind. Cambridge, MA: MIT Press, 1965.

McCulloch, Warren S. The Collected Works of Warren S. McCulloch, Volumes 1-4. Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989.

Perkel, D. H. "Logical Neurons: The Enigmatic Legacy of Warren McCulloch." Trends in Neuroscience, 11(1):9-12, 1988.
 

References

McCulloch, Warren S., and Walter Pitts. "A Logical Calculus of the Ideas Immanent in Nervous Activity." Bulletin of Mathematical Biophysics, 5 (1943): 115-133. Reprinted in The Collected Works of Warren S. McCulloch, vol. 1, Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989: 343-361.

McCulloch, Warren S. "What is a Number, That a Man May Know it, and a Man That He May Know a Number?" General Semantics Bulletin, Numbers 26 and 27. Lakeville, CT: Institute of General Semantics, 1961. Reprinted in The Collected Works of Warren S. McCulloch, vol. 4, Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989: 1225-1243.

McCulloch, Warren S. "The Reticular Formation Command and Control System." Information Processing in the Nervous System. Edited by K. N. Leibovic, 1969: 297-307. Reprinted in The Collected Works of Warren S. McCulloch, vol. 4, Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989: 1322-1332.

McCulloch, Warren S. "Where is Fancy Bred?" Lectures on Experimental Psychiatry. Proceedings of the Bicentennial Conference on Experimental Psychiatry, Pittsburgh, PA, March 5-7, 1959, sponsored by the Western Psychiatric Institute and Clinic. Pittsburgh, PA: University of Pittsburgh Press, 1961. Reprinted in The Collected Works of Warren S. McCulloch, vol. 4, Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989: 1211-1224.

Minsky, Marvin, and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press, 1969.

Pitts, Walter, and Warren S. McCulloch. "On How We Know Universals: The Perception of Auditory and Visual Forms." Bulletin of Mathematical Biophysics, 9:127-147, 1947. Reprinted in Embodiments of Mind and The Collected Works of Warren S. McCulloch, vol. 2, Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989: 530-550.

Rosenblatt, Frank. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books, 1962.

Rumelhart, David, John McClelland and the PDP Research Group. Parallel and Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. Cambridge, MA: MIT Press, 1986.

Turing, Alan M. "On Computable Numbers, with an Application to the Entscheidungsproblem." Proceedings of the London Mathematical Society, 42: 230-265, 1937.

Von Neumann, John. "The General and Logical Theory of Automata." In Cerebral Mechanisms in Behavior, The Hixon Symposium, Edited by L. A. Jeffress, New York, NY: John Wiley & Sons, 1951. Reprinted in Papers of John von Neumann on Computers and Computing Theory, William Aspray and Arthur Burks (eds.), Cambridge, MA: MIT Press, 1987, pp. 391-431.