Publication Date

8-18-1994

Technical Report Number

TR94-14

Subjects

Software, Computer Applications, Computing Methodologies

Abstract

The attempt to understand intelligence entails building theories and models of brains and minds, both natural and artificial. From the earliest writings of India and Greece, this has been a central problem in philosophy. The advent of the digital computer in the 1950s made it a central concern of computer scientists as well (Turing, 1950). The parallel development of the theory of computation (by John von Neumann, Alan Turing, Emil Post, Alonzo Church, Stephen Kleene, Markov, and others) provided a new set of tools with which to approach this problem through the analysis, design, and evaluation of computers and programs that exhibit aspects of intelligent behavior, such as the ability to recognize and classify patterns, to reason from premises to logical conclusions, and to learn from experience. In their pursuit of artificial intelligence and mind/brain modelling, some wrote programs that they executed on serial stored-program computers (e.g., Newell, Shaw and Simon, 1963; Feigenbaum, 1963); others had more parallel, brain-like networks of processors (reminiscent of today's connectionist networks) in mind and wrote more or less precise specifications of what such a realization of their programs might look like (e.g., Rashevsky, 1960; McCulloch and Pitts, 1943; Selfridge and Neisser, 1963; Uhr and Vossler, 1963); and a few took the middle ground (Uhr, 1973; Holland, 1975; Minsky, 1963; Arbib, 1972; Grossberg, 1982; Klir, 1985).

It is often suggested that two major approaches have emerged: symbolic artificial intelligence (SAI) and (numeric) artificial neural networks (NANN, or connectionist networks). Some (Norman, 1986; Schneider, 1987) have even suggested that the two are fundamentally, and perhaps irreconcilably, different. Indeed, it is this apparent dichotomy between two seemingly disparate approaches to modelling cognition and engineering intelligent systems that is responsible for the current interest in computational architectures for integrating neural and symbolic processes, a topic that is the focus of several recent books (Honavar and Uhr, 1994a; Goonatilake and Khebbal, 1994; Levine and Aparicio IV, 1994; Sun and Bookman, 1994). This raises some important questions: What exactly are symbolic processes, and what do they have to do with SAI? What exactly are neural processes, and what do they have to do with NANN? What (if anything) do SAI and NANN have in common, and how (if at all) do they differ? What exactly are computational architectures? Do the SAI and NANN paradigms need to be integrated? And if so, what are some possible ways to design computational architectures for this task?

This chapter explores some of these fundamental questions in detail. It argues that the dichotomy between SAI and NANN is more perceived than real. Our problems therefore lie first in dispelling mistaken notions, and second (perhaps more difficult) in developing systems that take advantage of both paradigms to build useful theories and models of minds/brains on the one hand, and robust, versatile, and adaptive intelligent systems on the other.
The first of these problems is best addressed by a critical examination of the popular conceptions of SAI and NANN systems, along with their philosophical and theoretical foundations and their practical implementations; the second, by a judicious theoretical and experimental exploration of the rich and interesting space of designs for intelligent systems that integrate concepts, constructs, techniques, and technologies drawn not only from SAI (Ginsberg, 1993; Winston, 1992) and NANN (McClelland, Rumelhart et al., 1986; Kung, 1993; Haykin, 1994; Zeidenberg, 1989), but also from other related paradigms such as statistical and syntactic pattern recognition (Duda and Hart, 1973; Fukunaga, 1990; Fu, 1982; Miclet, 1986), control theory (Narendra and Annaswamy, 1989), systems theory (Klir, 1969), genetic algorithms (Holland, 1975; Goldberg, 1989; Michalewicz, 1992), and evolutionary programming (Koza, 1992). Exploration of such designs should cover a broad range of problems in perception, knowledge representation and inference, robotics, language, and learning, and ultimately integrated systems that display what might be considered human-like general intelligence.
