About this Article
Written by: Jie Ma
Written on: April 6th, 2001
Tags: computer science
Thumbnail by: Alejandro zorrilal Cruz/Wikimedia Commons
About the Author
Jie Ma was an undergraduate student at the University of Southern California.

Volume I Issue II > Silicon Smarts: Artificially Intelligent Computers
The field of artificial intelligence (AI) incorporates many disparate disciplines. In the past few decades, theoretical work done in the field has made technological breakthroughs, such as expert systems and interactive robots, possible. These artificially intelligent systems touch our everyday lives, and new AI developments hold even more promise to benefit mankind.

The Basics of AI

Artificial intelligence has been making unprecedented advances in recent years. This rapidly developing discipline synthesizes many varied fields, including non-technical ones such as biology, psychology, sociology, and even philosophy. Developments in artificial intelligence, in turn, have profound effects on these fields. The complexity of the field leaves many people confused about what it precisely entails. Here, we present a basic idea of what artificial intelligence is and where it is heading.

The Early Days of AI

The field of artificial intelligence is relatively young. The creation of artificial intelligence as an academic discipline can be traced to the 1950s, when scientists and researchers began to consider the possibility of machines possessing intellectual capabilities similar to those of human beings (Fig. 1). Alan Turing, a British mathematician, first proposed a test to determine whether or not a machine is intelligent. The test later became known as the Turing Test: a machine tries to disguise itself as a human being in an imitation game by giving human-like responses to a series of questions. Turing believed that if a machine could make a human being believe he or she was communicating with another human being, then the machine could be considered as intelligent as a human being.
Alejandro zorrilal Cruz/Wikimedia Commons
Figure 1: The field of artificial intelligence has become an academic discipline that considers the possibility that machines can have similar intellectual abilities as humans.
The term "artificial intelligence" itself was coined in 1956 by John McCarthy, then a professor at Dartmouth College, for a conference he was organizing that year. The conference, later known among AI researchers as the Dartmouth Conference, established AI as a distinct discipline. It also defined the major goals of AI: to understand and model the thought processes of humans and to design machines that mimic this behavior.
Much of the AI research between 1956 and 1966 was theoretical in nature. The very first AI program, the Logic Theorist, was presented at the Dartmouth Conference and was able to prove mathematical theorems. Several other primitive AI programs with limited capabilities followed. One of these was "Sad Sam," a program written by Robert K. Lindsay in 1960 that understood simple English sentences and was capable of drawing conclusions from facts learned in a conversation [1]. Another was ELIZA, a program developed in the mid-1960s by Joseph Weizenbaum at MIT that simulated the responses of a therapist to patients. ELIZA's ability to use natural language made it appear intelligent, and it came close to passing the Turing Test, albeit in a controlled setting: a conversation between a patient and a psychiatrist.
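ELIZA's core technique, matching keyword patterns and echoing fragments of the user's words back inside response templates, can be sketched in a few lines of Python. The rules below are illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re

# Illustrative keyword rules in the spirit of ELIZA's therapist script.
# These patterns and templates are invented for this sketch.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "How long have you felt {0}?"),
    (r"My (.*)", "Tell me more about your {0}."),
]

def respond(sentence):
    """Return a therapist-style reply from the first rule that matches."""
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            # Echo the matched fragment back as a question.
            return template.format(*match.groups())
    return "Please go on."  # default when no pattern matches

print(respond("I am worried about my exams"))
# → Why do you say you are worried about my exams?
```

The apparent intelligence comes entirely from the echoed fragment; the program has no understanding of what it repeats, which is why ELIZA only convinced people within the narrow therapist-patient setting.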
With more and more successful demonstrations of the feasibility of AI, the focus of AI research shifted. Researchers turned their attention to solving specific problems in areas of possible AI application. This shift in research focus gave rise to the present-day definition of AI, that is, "a variety of research areas concerned with extending the ability of the computer to do tasks that resemble those performed by human beings," as V. Daniel Hunt puts it in his 1988 article "The Development of Artificial Intelligence" [2]. Some of the most interesting areas of current AI research include expert systems, neural networks, and robotics.

Expert Systems

The first area of AI application we explore is expert systems: AI programs that can make decisions normally left to human experts. A program called DENDRAL, developed at Stanford University beginning in 1965, was the grandparent of expert systems. Much like a human chemist, it could analyze information about chemical compounds to determine their molecular structure. A later program called MYCIN, developed in the mid-1970s, could help physicians diagnose bacterial infections; it is often referred to as the first true expert system.
Expert systems are perhaps the most easily implemented and most widely used AI technology. Although the effects of such systems may not be readily apparent, they have had a tremendous impact on our lives. In fact, many of the computer programs we use today can be considered expert systems. The spell-checking utility in a word processor is an expert system, albeit a very simple one: it takes the role of a proofreader by reading a group of sentences, checking them against known spelling and grammatical rules, and suggesting possible corrections to the writer. Expert systems, combined with robotics, brought about the automation of manufacturing, which accelerated production rates and reduced error. A typical assembly line that required hundreds of people in the 1950s now needs only ten to twenty, who supervise the expert systems that do the job. The pioneers in industrial automation are Japanese automobile manufacturers such as Toyota and Honda, with up to 80% of the manufacturing process automated.
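The proofreading loop just described can be sketched as a toy expert system in Python: a knowledge base (a word list) plus an inference step (suggest the nearest known word). The tiny dictionary and the edit-distance rule are illustrative assumptions, not how any particular word processor actually works:

```python
# Toy spell checker: knowledge base + nearest-word inference.
# The dictionary below is deliberately tiny and illustrative.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word):
    """Return the word unchanged if known, else the closest dictionary word."""
    if word in DICTIONARY:
        return word
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))

print(suggest("quik"))  # → quick
```

Real spell checkers add grammatical rules, word frequencies, and context, but the shape is the same: encoded expert knowledge consulted mechanically.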
The most advanced expert systems, like many other advanced technologies, are used extensively in military applications. An example is the next-generation fighter plane of the U.S. Air Force: the F-22 Raptor (Fig. 2). The targeting computer aboard the Raptor takes the role of a radar controller by interpreting radar signals, identifying a target, and checking its radar signature against known enemy types stored in its database. The process is illustrated in Fig. 2. It is rumored that the computer system in the Raptor can automatically list the six most dangerous targets while keeping track of tens of others, helping the pilot decide which targets to engage first. Thus, the use of expert systems can significantly reduce the workload of humans.
U.S. Air Force photo by Master Sgt. Kevin J. Gruenwald/Wikimedia Commons
Figure 2: The F-22 Raptor has a computer onboard that processes incoming information to reduce pilots' workload.
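The prioritization idea described above — score each contact against a database of known types, then report the most dangerous few — can be illustrated with a short sketch. Everything here (the threat types, scores, and contacts) is invented for illustration and has nothing to do with the Raptor's actual software:

```python
# Hypothetical threat scores per contact type (invented for this sketch).
THREAT_DB = {"fighter": 9, "bomber": 6, "transport": 2, "unknown": 5}

def prioritize(contacts, top_n=6):
    """Rank (type, distance_km) contacts by threat score, then by closeness,
    and return the top_n most dangerous."""
    def score(contact):
        kind, distance_km = contact
        # Higher score first; among equal scores, closer contacts first.
        return (THREAT_DB.get(kind, 0), -distance_km)
    return sorted(contacts, key=score, reverse=True)[:top_n]

contacts = [("transport", 80), ("fighter", 40), ("unknown", 25),
            ("bomber", 60), ("fighter", 90)]
print(prioritize(contacts, top_n=3))
# → [('fighter', 40), ('fighter', 90), ('bomber', 60)]
```

The expert knowledge lives in the database of known types; the ranking itself is simple bookkeeping, which is exactly why such systems offload routine workload from the pilot.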
While expert systems can be extremely helpful to human beings, there are tasks that current expert systems simply cannot accomplish. To return to our earlier example, the spell-checking utility can check the mechanics of an article, but not aspects such as content and logic. Thus, it is only a marginally helpful proofreader; it would be far more competent if it could identify logical shortcomings as well. To do so, an expert system must be able to make cognitive connections between objects. This leads us to the next area of AI research: neural networks.

Neural Networks

Another area of great interest is neural networks, which implement the ability to learn into a computer program. The ability to make connections between facts and draw conclusions is central to learning. Humans rely on what we call common sense to make such connections. However, something that is common sense to us may be very difficult to implement in a computer program. One such common-sense case is making a causal connection; as Charles L. Ortiz Jr. wrote, "The occurrence of an event is never an isolated matter. An event owes its existence to other events which causally precede it; an event's presence is, in turn, felt by certain collections of subsequent events" [3]. Each node in a neural network must be able to take a number of inputs, process them to determine the connections that need to be made, and send outputs to the relevant nodes determined in the previous step. A comparison of a node in a neural network and a human neuron is illustrated in Fig. 3 below.
Figure 3: Each processing element in a neural network receives a number of inputs, determines to which processing elements it should send outputs, and passes the processed data on to those elements, much like a human neuron does.
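A single node of the kind shown in Fig. 3 can be sketched in a few lines of Python: it weights its inputs, sums them, and squashes the total through an activation function. The sigmoid activation and the sample weights here are conventional illustrative choices, not taken from any particular network:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation that squashes output to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Arbitrary example values: two inputs, two weights, a small bias.
output = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(round(output, 3))
```

A network is many such nodes wired together, with learning amounting to adjusting the weights; the hard problems discussed below concern deciding which connections matter.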
The aforementioned "Sad Sam" program is an example of the principles of a neural network in action, though it is primitive and works with limited input. Sam is capable of drawing a conclusion from known facts: given the sentences "Jim is John's brother" and "Jim's mother is Mary," Sad Sam was smart enough to understand that Mary must therefore be John's mother [1]. While it is relatively easy to let a program make connections among a limited set of information, there are innumerable connections that can be made about things in the real world, and this huge number makes the implementation of sophisticated neural networks a daunting task. A spin-off of the neural network problem is fuzzy logic, which deviates from the traditional yes-or-no type of Boolean logic. In fuzzy logic, values are no longer discrete and mutually exclusive; a value can belong to two categories simultaneously. Temperature provides an example: ninety degrees Fahrenheit is "hot" as an outdoor temperature, but as a body temperature it is abnormally "cold." Through the implementation of fuzzy logic, a neural network would be able to make the same judgment.
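The temperature example can be made concrete with two fuzzy membership functions, each returning a degree of membership between 0 and 1 rather than a yes-or-no answer. The particular ramp shapes and thresholds below are made up for illustration:

```python
def hot_outdoors(temp_f):
    """Degree (0..1) to which temp_f counts as hot outdoor weather.
    Illustrative ramp: 70 °F is not hot at all, 100 °F is fully hot."""
    return min(1.0, max(0.0, (temp_f - 70) / 30))

def cold_body(temp_f):
    """Degree (0..1) to which temp_f counts as a cold body temperature.
    Illustrative ramp: 98.6 °F is normal, 88.6 °F is fully cold."""
    return min(1.0, max(0.0, (98.6 - temp_f) / 10))

t = 90
# The same 90 °F is mostly "hot" outdoors yet strongly "cold" as a body temperature.
print(hot_outdoors(t), cold_body(t))
```

The same number thus belongs to both the "hot" and "cold" categories at once, to different degrees depending on context, which is precisely what Boolean logic cannot express.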
There are still many open problems in neural network research, including creating algorithms to make connections, to determine which sets of data should be connected, and even to abandon irrelevant data when necessary. Many aspects of the human learning process present challenges for the implementation of a neural network, and the complexity of these problems is the reason why much theoretical work remains to be done in the field. While a complete set of solutions lies beyond the scope of the theories and technology currently available, partial solutions have been implemented with great success. Deep Blue, the chess-playing program developed by IBM, is one example of an application of machine learning principles: it was capable of learning from previous games and predicting the possible moves of an opponent [5]. As our understanding of the human brain and the learning process grows, so will our ability to create more effective algorithms for learning and for making connections among known ideas.
The holy grail of AI research lies in the realization of the intelligent robot, a familiar figure to any follower of science-fiction literature, television, or film. As illustrated in these works, an intelligent robot will not only be able to help and serve us; it could also offer companionship. What is science fiction today may become reality tomorrow.

AI in Robotics

Robotics is the area of AI technology most attractive to the public, and it could be the area where AI proves most beneficial to mankind. The use of industrial robots that perform repetitive tasks accurately has already increased the productivity of assembly lines in manufacturing plants. The addition of artificial intelligence to these industrial robots could further boost their productivity by allowing them to do a wider variety of tasks more efficiently. In the future, tiny nano-robots may be able to enter the human body, repair damaged organs, and destroy bacteria and cancerous tissue. Special-purpose robots such as bomb-defusing robots and space exploration robots can go into hostile environments and accomplish tasks deemed too dangerous for humans.
Robots with artificial intelligence can accomplish these tasks with little human input, and thus do their jobs more quickly and reliably. The advantages offered by intelligent robots are self-evident, and the implications of the technology could be as far-reaching as the invention of the light bulb or the automobile. Like those inventions, which brought electrocution and car accidents along with their benefits, the appearance of intelligent robots could carry unforeseen problems; even so, the benefits should far outweigh the disadvantages.
While the benefit of robots with AI is great, there are numerous technical hurdles encountered when implementing AI in a robot, many of which are being researched today. A robot must be capable of perception in order to interact with the world around it. The ability to see, hear, and touch can be implemented through cameras, infrared and ultrasound sensors, collision sensors, and other devices. While implementing these physical sensors is relatively simple, enabling the robot to make sense of the information they provide can be quite difficult. The earliest attempt to implement vision in a robot was carried out by a team of researchers at MIT in the mid- to late 1960s.
This line of work culminated in SHRDLU, a program developed by Terry Winograd at MIT around 1970 that could see and stack boxes on a table and even answer questions about the objects on it. SHRDLU was a true breakthrough, for it not only perceived three-dimensional objects but also had a basic understanding of physics and could use this knowledge to accomplish work on its own. However, one must not forget that such systems could only operate in a limited environment with a few stationary geometric objects, which the researchers called "the micro-blocks world" [1]. The real world is far more complex, as it contains far more dynamic objects.

The Future of AI

An interesting design challenge in the AI robotics field is the so-called "micro mouse," a small robot equipped with a variety of sensors that can navigate a maze. The micro mouse is especially useful for exploring different methods of making a robot aware of its surroundings. In order to navigate a maze successfully, a micro mouse must be able to recognize where it has been and backtrack when necessary. As mazes approach the complexity of a real-world environment, however, the task of recognizing each local environment becomes more difficult. Wai K. Yeap and Margaret E. Jefferies discussed this challenge in their article "Computing a Representation of the Local Environment" [4], proposing an algorithm that constructs a raw cognitive map so that a robot can recognize an environment it has previously encountered.
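The core explore-and-backtrack behavior a micro mouse needs — remember visited cells, press forward, and retreat from dead ends — can be sketched as a depth-first search over a grid. The maze layout below is an arbitrary example (0 = open cell, 1 = wall):

```python
# Illustrative 4x4 maze: 0 = open, 1 = wall.
MAZE = [
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def solve(maze, pos, goal, visited=None):
    """Depth-first search: return a list of cells from pos to goal, or None."""
    if visited is None:
        visited = set()
    if pos == goal:
        return [pos]
    visited.add(pos)  # remember where we have been
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                and maze[nr][nc] == 0 and (nr, nc) not in visited):
            path = solve(maze, (nr, nc), goal, visited)
            if path:  # a neighbor reached the goal
                return [pos] + path
    return None  # dead end: backtrack to the previous cell

print(solve(MAZE, (0, 0), (3, 3)))
# → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (3, 3)]
```

A real micro mouse faces the harder version of this problem: the maze is not given in advance, so the map itself must be built from sensor readings as the robot moves, which is exactly the cognitive-map problem Yeap and Jefferies address.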
Lately, a great deal of effort has been invested in creating robots that can work in a team. Each robot in such a team is called an agent, and the applications for such teams are numerous: they can accomplish tasks too difficult for a single robot, such as exploring and mapping a large area or repairing multiple faults in an industrial system. The Jet Propulsion Laboratory is currently working on the Distributed Rovers Project, which aims to send teams of robotic rovers to Mars to explore and map the planet. It is almost certain that these robotic agent teams will become a great asset to the space program and, eventually, to humankind.

A Perspective on AI

The field of artificial intelligence is truly a fascinating one. Like many other new technologies, AI is changing our lives every day. It is quite possible that the near future will bring intelligent machines to make life more convenient and comfortable for all of us. Although some may argue otherwise, there is no need to fear artificial intelligence. Like all other machines, AI machines do what human programmers tell them to do. There is, however, a need to understand AI, for it is through understanding that we can make the technology most beneficial.

References

    • [1] "About Artificial Intelligence". Internet: http://ai.about.com [1 April 2000].
    • [2] V. Daniel Hunt. Applied Artificial Intelligence: A Source Book. Ed. Stephen J. Andriole and Gerald W. Hoppie. New York: McGraw-Hill, 1992.
    • [3] Charles L. Ortiz Jr. "A Commonsense Language for Reasoning about Causation and Rational Action," Artificial Intelligence Journal, vol. 111(1-2), pp. 73, 1999.
    • [4] Wai K. Yeap and Margaret E. Jefferies, "Computing a Representation of the Local Environment," Artificial Intelligence Journal, vol. 107(1-2), pp. 265, 1999.
    • [5] "Kasparov vs. Deep Blue: The Rematch." IBM Research. Internet: http://www.research.ibm.com/deepblue/ [1 April 2000].
    • [6] "IHS Jane's International Defense Review." Jane's Defense Digest. IHS. Internet: http://idr.janes.com/public/idr/index.shtml [1 April 2000].