Artificial Intelligence
 

Artificial intelligence (AI) is the field of engineering which builds systems, primarily computer systems, to perform tasks that require intelligence. AI research usually focuses on key aspects of intelligence such as automated reasoning, decision making, machine learning, machine vision, natural language processing, pattern recognition, planning, problem-solving, and robot control. The field has often set itself ambitious goals, seeking to build machines which can "out-think" humans in particular domains of skill and knowledge, and it has achieved some success in this. Some researchers have even speculated that it is possible to build machines which can imitate human behavior in general, but this has yet to meet with any success, in part because nobody knows what makes humans intelligent, and in part because the available techniques are computationally intractable in unconstrained domains.

A long discussion among researchers in AI has been devoted to delimiting the kinds of tasks which can properly be considered intelligent. The search for a formal definition of intelligence was begun by Alan Turing, the mathematician who first formalized the definition of the computer. In a 1950 paper, "Computing Machinery and Intelligence," Turing outlined an operational test for intelligence, one in which intelligence is determined purely by the behavioral responses of a system to the input it is given (see Turing Test). Several philosophers have argued that computer programs which imitate intelligent behavior, and indeed computational systems in principle, cannot be genuinely intelligent (Dreyfus 1979; Searle 1980), but can only mimic intelligence in the way a parrot mimics language without really understanding what it is saying.

An operational definition is not the only possible definition, however. There is a philosophical tradition going back to Aristotle which holds that logical reasoning, or rationality, is the supreme expression of the higher mental faculties. This logicist tradition accordingly suggests that an intelligent machine should be able to know and reason according to the requirements of an ideally rational and flawlessly logical mind. But no human is perfectly logical and rational. Most psychologists believe that this is not simply because humans make mistakes, but because they use thought strategies other than logic, such as visual imagination, metaphorical thought, and estimation. Much of the early AI work of the 1950s and 1960s was devoted to the logical approach and produced a complete theorem-proving algorithm for first-order logic (Robinson 1965). Its influence is still felt, though most logic-based systems now incorporate some form of probability estimation.
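
The core idea of Robinson's resolution method can be conveyed with a propositional sketch; the full first-order procedure additionally handles variables through unification. In the hypothetical Python example below (invented clauses, not Robinson's own code), a knowledge base entails a query if adding the query's negation allows resolution to derive the empty clause.

    from itertools import combinations

    def resolve(c1, c2):
        """All resolvents of two clauses; a clause is a frozenset of literals,
        and a literal is a string, with '~' marking negation."""
        resolvents = []
        for lit in c1:
            negated = lit[1:] if lit.startswith("~") else "~" + lit
            if negated in c2:
                resolvents.append(frozenset((c1 - {lit}) | (c2 - {negated})))
        return resolvents

    def entails(kb, query):
        """Refutation proof: kb entails query iff kb plus the negation of the
        query lets resolution derive the empty clause."""
        negated_query = query[1:] if query.startswith("~") else "~" + query
        clauses = set(kb) | {frozenset({negated_query})}
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for r in resolve(c1, c2):
                    if not r:
                        return True        # empty clause: contradiction found
                    new.add(r)
            if new <= clauses:
                return False               # nothing new can be derived
            clauses |= new

    # "If it rains the ground gets wet" (~rain v wet) and "it rains" entail "wet".
    print(entails({frozenset({"~rain", "wet"}), frozenset({"rain"})}, "wet"))   # True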

Some researchers have suggested that it is better first to understand how humans actually think and solve problems, and then to build computers which use those same strategies. This naturalist tradition is closely allied to the science of cognitive psychology, as it depends on psychological research for its engineering designs. Simon and Newell (1972) were the greatest advocates of this approach. They used a method called protocol analysis to study how humans solve certain formal problems in sufficient detail to program a computer with the same algorithm. Unfortunately, their methods cannot always determine how humans are thinking, especially when problem solving draws on experience, skill, and knowledge which the problem-solver uses but cannot easily explain to others, or may not even be aware of.

Yet another approach is less concerned with perfecting or imitating human intelligence than with finding good solutions to difficult technical problems. Often the solutions to these problems require knowledge of the domain, reasoning about multiple factors, learning, or complex analysis of perceptual or sensory data, abilities attributable to intelligent creatures which are now susceptible to various technological solutions with programmed computers. This approach is the one taken by most of the people developing artificial intelligence techniques and designing intelligent systems. Many of these researchers have attempted to distance themselves from the bold and inflated claims of AI, and to emphasize their intent to build useful tools, by calling their work Cognitive Engineering or Human-Computer Intelligent Interaction (Winograd and Flores 1986).

The first fully developed attempt at intelligent computation was the McCulloch and Pitts logical neural network (see McCulloch, Warren). This work led to the field of study called Cybernetics, which later spun off into Neural Networks after it was pushed aside by AI. The phrase "artificial intelligence" was coined by John McCarthy, then a young professor of mathematics at Dartmouth, at a summer workshop held there in 1956. This two-month workshop marks the official birth of AI and brought together influential researchers who would nurture the young science over the next several decades: Marvin Minsky, Claude Shannon, Arthur Samuel, Ray Solomonoff, Oliver Selfridge, Allen Newell, and Herbert Simon.

The first significant AI program, the Logic Theorist, was presented by Newell and Simon (1956) at the Dartmouth workshop. The Logic Theorist proved theorems of mathematical logic from a given set of axioms and a set of rules. It was followed by the General Problem Solver (Newell and Simon 1961), which demonstrated that the technique of proving theorems could be applied to all sorts of problems by defining the theorem to be proven as the "goal" and searching for a series of moves which leads from what is already known to the goal that is sought. This technique can work well for simple problems, but since the total number of possible alternative moves grows exponentially in the number of steps to a solution, it quickly breaks down.
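
The flavor of this goal-directed search can be seen in a small illustration that is not drawn from the Logic Theorist or GPS themselves. The hypothetical Python sketch below treats each configuration of two water jugs (four-gallon and three-gallon) as a state and each pouring action as a move, and searches breadth-first for a sequence of moves leading from the empty jugs to a state with exactly two gallons in the larger jug; even in this tiny domain the number of possible move sequences grows rapidly with their length.

    from collections import deque

    def successors(state):
        """All states reachable in one move from (a, b): jug A holds at most
        4 gallons, jug B at most 3."""
        a, b = state
        moves = [
            (4, b), (a, 3),                              # fill either jug
            (0, b), (a, 0),                              # empty either jug
            (a - min(a, 3 - b), b + min(a, 3 - b)),      # pour A into B
            (a + min(b, 4 - a), b - min(b, 4 - a)),      # pour B into A
        ]
        return set(moves) - {state}

    def solve(start=(0, 0), goal=2):
        """Breadth-first search for a sequence of states ending with exactly
        `goal` gallons in jug A."""
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1][0] == goal:
                return path
            for nxt in successors(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    print(solve())   # one shortest solution, e.g. (0,0) (0,3) (3,0) (3,3) (4,2) (0,2) (2,0)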

It was recognized early that random searching was an inefficient way to find solutions, and by the early 1970s new insights into the nature of computational complexity and the theory of NP-completeness pointed to the intractability of other logic and graph-based search techniques for general problem-solving. These challenges led Newell and Simon to suggest that AI research on problem-solving ought to focus on finding good heuristics to use when searching. A heuristic is a search strategy, and a good heuristic helps one find a solution faster by reducing the number of dead-ends encountered during a search. For example, "Always try the best alternative first" is a good heuristic; the trouble is in knowing which alternative is best! Apart from the search for intelligent heuristics, researchers turned to the study of limited task domains, or micro-worlds as they came to be called, such as analogy problems or planning manipulations in a world consisting only of wooden blocks sitting on a table (called the blocks world), as well as games and expert systems.
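
As a rough illustration of how a heuristic guides search, the sketch below runs a best-first search over a small invented graph: the frontier is kept ordered so that the path which looks best (its cost so far plus the heuristic's estimate of the remaining distance) is always tried first, which is essentially the A* algorithm. All node names, step costs, and heuristic values are made up for the example.

    import heapq

    # Invented graph: each node lists (neighbor, step cost); H estimates the
    # remaining distance from each node to the goal "G".
    GRAPH = {
        "S": [("A", 2), ("B", 5)],
        "A": [("C", 2), ("D", 4)],
        "B": [("D", 1), ("G", 7)],
        "C": [("G", 6)],
        "D": [("G", 3)],
        "G": [],
    }
    H = {"S": 7, "A": 6, "B": 4, "C": 6, "D": 3, "G": 0}

    def best_first(start="S", goal="G"):
        """Always expand the path the heuristic rates best: the priority is the
        cost so far plus the estimated cost remaining (the A* ordering)."""
        frontier = [(H[start], 0, [start])]      # (priority, cost so far, path)
        visited = set()
        while frontier:
            _, cost, path = heapq.heappop(frontier)
            node = path[-1]
            if node == goal:
                return path, cost
            if node in visited:
                continue
            visited.add(node)
            for neighbor, step in GRAPH[node]:
                g = cost + step
                heapq.heappush(frontier, (g + H[neighbor], g, path + [neighbor]))
        return None

    print(best_first())   # (['S', 'A', 'D', 'G'], 9) with the invented numbers above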

Expert systems are systems which utilize a large amount of knowledge about a small area of expertise in order to solve problems in that domain. The first such system was DENDRAL (Buchanan et al. 1969), which could infer the structure of a molecule given its chemical formula and information from a mass spectrogram of the molecule. DENDRAL achieved this difficult task because it was provided with rules-of-thumb and tricks for recognizing common patterns in the spectrograms. These rules were developed in collaboration with the Nobel laureate Joshua Lederberg. Unlike DENDRAL, the next major expert system, MYCIN, used rules which could only be obtained from human experts, and which incorporated uncertainty in the form of numeric certainty weights. MYCIN used some 450 such rules to diagnose infectious blood diseases (Buchanan and Shortliffe 1984). Expert systems have proven to be one of the most successful applications of AI so far. Thousands of expert systems are currently in use for medical diagnosis, for servicing and trouble-shooting mechanical devices, and for information searches. Other commercially successful applications include planning and machine learning.
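
The mechanics of a rule-based system can be sketched in a few lines. The toy Python example below is only in the spirit of MYCIN, with invented findings, rules, and weights: each rule maps a set of known findings to a conclusion with a numeric certainty, and the system chains rules together until nothing new can be concluded. MYCIN itself reasoned backward from hypotheses and used a more elaborate certainty-factor calculus.

    # Invented illustrative rules: (required findings, conclusion, certainty).
    RULES = [
        ({"fever", "stiff_neck"}, "meningitis_suspected", 0.7),
        ({"gram_positive", "chains"}, "streptococcus", 0.8),
        ({"meningitis_suspected", "streptococcus"}, "strep_meningitis", 0.9),
    ]

    def forward_chain(findings):
        """Apply every rule whose conditions are all known, combining certainties
        by simple multiplication (a crude stand-in for MYCIN's calculus)."""
        known = {finding: 1.0 for finding in findings}
        changed = True
        while changed:
            changed = False
            for conditions, conclusion, certainty in RULES:
                if conditions.issubset(known) and conclusion not in known:
                    known[conclusion] = certainty * min(known[c] for c in conditions)
                    changed = True
        return known

    print(forward_chain({"fever", "stiff_neck", "gram_positive", "chains"}))
    # ... 'meningitis_suspected': 0.7, 'streptococcus': 0.8, 'strep_meningitis': 0.63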

Planning involves finding the simplest and most efficient plan to achieve a certain goal. Generally, this is done by knowing the current state of the "world," the desired state of the world, and a set of actions, called operators, which can be taken to transform the world. The Stanford Research Institute Problem Solver (STRIPS) was an early planner whose language for describing actions is still widely used and extended (Fikes and Nilsson 1971). A STRIPS operator consists of three components: 1) the action description, 2) the preconditions of the action, or the way the world must be before the action can be taken, and 3) the effect, or how the world is changed by taking the action. To develop a plan, the system then searches for a reasonably short or cost-efficient sequence of operators which will achieve the goal. Planning systems are widely used to generate production schedules in factories, to find efficient ways to lay out circuits on microchips or to machine metal parts, and to plan and coordinate complex projects involving many people and organizations, such as space shuttle launches.
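
A toy STRIPS-style planner can be written directly from this description; the Python sketch below is an illustration, not the original STRIPS code. Each invented operator carries preconditions, an add list, and a delete list (the add and delete lists together make up the effect), and the planner searches breadth-first for a sequence of operators whose result satisfies the goal.

    from collections import deque

    # Invented blocks-world operators: (name, preconditions, added facts, deleted facts).
    OPERATORS = [
        ("stack_A_on_B", {"clear A", "clear B", "A on table"},
                         {"A on B"}, {"clear B", "A on table"}),
        ("stack_B_on_C", {"clear B", "clear C", "B on table"},
                         {"B on C"}, {"clear C", "B on table"}),
        ("unstack_A_from_B", {"A on B", "clear A"},
                         {"A on table", "clear B"}, {"A on B"}),
    ]

    def plan(initial, goal):
        """Breadth-first search over operator sequences until the goal holds."""
        frontier = deque([(frozenset(initial), [])])
        seen = {frozenset(initial)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for name, preconditions, added, deleted in OPERATORS:
                if preconditions <= state:
                    nxt = frozenset((state - deleted) | added)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None

    start = {"A on table", "B on table", "C on table", "clear A", "clear B", "clear C"}
    print(plan(start, goal={"A on B", "B on C"}))   # ['stack_B_on_C', 'stack_A_on_B']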

As the name implies, machine learning develops techniques for machines to learn from experience and improve over time. Machine learning was first conceived by W. Ross Ashby (1940), while the first successful learning program was Samuel's checkers-playing program (1959). Most forms of machine learning use statistical techniques to infer rules and discover relationships. The most popular kind of learning system is the Neural Network, though there are many others. Machine learning is useful for solving problems in which the rules governing the domain are difficult to discover and a large amount of data is available for analysis. In these respects, machine learning is closely related to Data Mining. Recently, a new subdiscipline called Computational Learning Theory has developed to study the complexity of learning and to offer measures of learnability and accuracy for various classes of learning tasks (Kearns and Vazirani 1994).
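
A minimal illustration of learning a rule from data rather than writing it by hand: the Python sketch below trains a single perceptron, one of the simplest neural-network learning procedures, on an invented set of examples. The weights are adjusted whenever an example is misclassified, and after a few passes the network has in effect inferred the underlying relationship (here, the logical AND of its two inputs).

    # Invented training set: pairs of inputs and the desired output (logical AND).
    DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    def train(data, passes=10, rate=0.5):
        """Perceptron learning rule: nudge the weights toward each example
        that the current weights get wrong."""
        w1 = w2 = bias = 0.0
        for _ in range(passes):
            for (x1, x2), target in data:
                predicted = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
                error = target - predicted          # nonzero only when wrong
                w1 += rate * error * x1
                w2 += rate * error * x2
                bias += rate * error
        return w1, w2, bias

    w1, w2, bias = train(DATA)
    for (x1, x2), target in DATA:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        print((x1, x2), "->", output, "  (desired:", target, ")")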

Another important area of AI research has been Natural Language Processing (NLP). NLP attempts to provide computers with the ability to understand natural human languages, such as English or Russian. Work in this area draws heavily on theories of grammar and syntax borrowed from Computational Linguistics, and attempts to decompose sentences into their grammatical structures, assign the correct meaning to each word, and interpret the overall meaning of the sentence. This task turns out to be very difficult because of the variability of language and the many kinds of ambiguity which exist. The applications of successful NLP programs would include machine translation from one natural language to another and natural language Human-Computer Interfaces. A great deal of success has already been achieved in the related areas of Optical Character Recognition and Speech Recognition, which employ machine learning techniques to translate text and sound inputs into words but stop short of interpreting the meaning of those words.
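
Much of the difficulty comes from structural ambiguity: one string of words can often be assigned more than one grammatical structure. The Python sketch below, a toy chart parser over an invented six-rule grammar rather than any production NLP system, counts the distinct parses of "I saw the man with the telescope," a sentence which is ambiguous between seeing by means of a telescope and a man who happens to have one.

    from collections import defaultdict

    # A tiny invented grammar (binary rules) plus a word list (lexicon).
    RULES = [("S", "NP", "VP"), ("VP", "V", "NP"), ("VP", "VP", "PP"),
             ("NP", "Det", "N"), ("NP", "NP", "PP"), ("PP", "P", "NP")]
    LEXICON = {"I": ["NP"], "saw": ["V"], "the": ["Det"],
               "man": ["N"], "telescope": ["N"], "with": ["P"]}

    def count_parses(words):
        """CYK-style chart parsing: chart[i][j][category] is the number of
        distinct parse trees of that category spanning words[i:j]."""
        n = len(words)
        chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
        for i, word in enumerate(words):
            for category in LEXICON[word]:
                chart[i][i + 1][category] += 1
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for parent, left, right in RULES:
                        chart[i][j][parent] += chart[i][k][left] * chart[k][j][right]
        return chart[0][n]["S"]

    print(count_parses("I saw the man with the telescope".split()))   # 2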

Game-playing programs have done much to popularize AI since its beginning. Programs to play simple games like tic-tac-toe (noughts and crosses) are trivial, but games such as checkers (draughts) and chess are more difficult and appear to require intelligence. Chess-playing automata based on clock-making technology have been around for centuries; von Kempelen's "Chess-playing Turk" is purported to have beaten Napoleon in 1809. At IBM, Samuel began working in 1952 on the program which would be the first to play tournament-level checkers, a feat it achieved by learning from its own mistakes (Samuel 1959). The first computer to beat a human grandmaster in a chess match was HITECH (Berliner 1989). And in May of 1997, IBM's DEEP BLUE computer beat the top-ranked chess player in the world, Garry Kasparov. Unfortunately, success in a single domain such as chess does not translate into general intelligence, but it does demonstrate that seemingly intelligent behaviors can be automated at a level of performance which exceeds human capabilities.
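
Underlying most of these programs is game-tree search: the program considers its possible moves, the opponent's best replies to each, and so on, and chooses the move whose worst-case outcome is best. The hypothetical Python sketch below applies this minimax idea to tic-tac-toe; real chess and checkers programs differ mainly in scale, in pruning, and in heuristic evaluation of positions too deep to search to the end.

    # Minimax search for tic-tac-toe; the board is a 9-character string.
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score for X, move) assuming optimal play by both sides:
        +1 means X wins, -1 means O wins, 0 means a draw."""
        won = winner(board)
        if won:
            return (1 if won == "X" else -1), None
        moves = [i for i, cell in enumerate(board) if cell == " "]
        if not moves:
            return 0, None                       # board full: a draw
        results = []
        for move in moves:
            child = board[:move] + player + board[move + 1:]
            score, _ = minimax(child, "O" if player == "X" else "X")
            results.append((score, move))
        # X picks the highest score, O the lowest.
        return max(results) if player == "X" else min(results)

    score, move = minimax(" " * 9, "X")
    print(score)   # 0: with perfect play on both sides, tic-tac-toe is a draw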

AI is an active area of research, as well as the basis for many useful computer systems. While the field does not appear likely to construct a computer which displays the general mental capabilities of a typical human any time soon, it has produced programs which can perform apparently intelligent tasks with far greater skill and reliability than humans. AI thus promises to automate many of the mundane but necessary tasks now performed by people.
 
 
 

by Peter M. Asaro

 
 
 

For Further Research
 
 
 

Barr, A., E. A. Feigenbaum, and P. R. Cohen, Eds. The Handbook of Artificial Intelligence, Vols. 1-4. Stanford and Los Altos, CA: HeurisTech Press and Kaufmann, 1981-1989. 
 

Crevier, D. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993. 
 

McCorduck, P. Machines Who Think. San Francisco, CA: W. H. Freeman and Company, 1979.
 

Newborn, M. Kasparov Versus Deep Blue: Computer Chess Comes of Age. 1996.
 

Nilsson, N. J. Artificial Intelligence: A New Synthesis. San Mateo, CA: Morgan Kaufmann, 1998. 
 

Russell, S. J., and P. Norvig. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall, 1995. 
 

Shapiro, S. C., Ed. Encyclopedia of Artificial Intelligence. 2nd ed. New York: Wiley, 1992. 
 

Webber, B. L., and N. J. Nilsson, Eds. Readings in Artificial Intelligence. San Mateo, CA: Morgan Kaufmann, 1981. 
 
 
 

References
 

Ashby, W. Ross. "Adaptiveness and Equilibrium." Journal of Mental Science, Vol. 86: 478-483, 1940.
 

Berliner, H. J. "HITECH Chess: From Master to Senior Master with No Hardware Change." In MIV-89: Proceedings of the International Workshop on Industrial Applications of Machine Intelligence and Vision (Seiken Symposium), pp. 12-21, 1989.
 

Buchanan, B. G., G. L. Sutherland, and E. A. Feigenbaum. "Heuristic DENDRAL: A Program for Generating Explanatory Hypotheses in Organic Chemistry." In B. Meltzer, D. Michie, and M. Swann, Eds., Machine Intelligence 4. Edinburgh, UK: Edinburgh University Press, pp. 209-254, 1969.
 

Buchanan, B. G., and E. H. Shortliffe, Eds. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley, 1984.
 

Dreyfus, H. L. What Computers Can't Do: A Critique of Artificial Reason. New York: Harper and Row, 1979. Reprinted as What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992.
 

Fikes, R. E. and N. J. Nilsson. "STRIPS: A New Approach to the Application of Theorem-Proving to Problem-Solving." Artificial Intelligence, 2 (3-4): 189-208, 1971.
 

Kearns, M., and U. V. Vazirani. An Introduction to Computational Learning Theory. Cambridge, MA: MIT Press, 1994.
 

McCarthy, J. "Programs with common sense." Proceedings of the Symposium on Mechanisation of Thought Processes, vol. 1. London: Her Majesty's Stationery Office, pp. 77-84, 1958. Reprinted in Minsky, M. L., Ed. Semantic Information Processing. Cambridge, MA: MIT Press, pp. 403-418, 1968. 
 

Newell, A., and H. A. Simon. "The logic theory machine: a complex information processing system." IRE Transactions on Information Theory IT-2, 3: 61-79, 1956. 
 

Newell, A., and H. A. Simon. "GPS, A Program that Simulates Human Thought." In H. Billing, Ed., Lernende Automaten, pp. 109-124, 1961. Reprinted in E. A. Feigenbaum and J. Feldman, Eds., Computers and Thought. New York, NY: McGraw-Hill, 1963, pp. 279-293. 
 

Newell, A., and H. A. Simon. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall, 1972. 
 

Robinson, J. A. "A Machine-Oriented Logic Based on the Resolution Principle." Journal of the Association for Computing Machinery, 12: 23-41, 1965.
 

Samuel, A. L. "Some Studies in Machine Learning Using the Game of Checkers." IBM Journal of Research and Development, 3(3): 210-229, 1959. 
 

Searle, J. R. "Minds, Brains and Programs." Behavioral and Brain Sciences, 3: 417-457, 1980.
 

Turing, A. M. "Computing Machinery and Intelligence." Mind, 59: 433-460, 1950. 
 

Winograd, T., and F. Flores. Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex, 1986.