Turing test: Could a computer imitate human thinking? Turing proposed a test: a human interrogator at a keyboard puts questions to both a machine and another human. If the interrogator cannot reliably distinguish between the two, the computer's performance counts as thinking.
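A minimal sketch of the imitation-game protocol may make the setup concrete. Everything here is a placeholder: the respondent functions, the question list and the chance-level interrogator are hypothetical stand-ins for the typed exchange Turing describes (the two sample questions are from his 1950 paper).

```python
import random

def imitation_game(interrogator, machine, human, questions):
    """One round of the imitation game: the interrogator sees only
    typed answers from two hidden respondents, labelled A and B, and
    must say which label hides the machine."""
    respondents = {"A": machine, "B": human}
    if random.random() < 0.5:                    # hide the identities
        respondents = {"A": human, "B": machine}

    transcript = {label: [ask(q) for q in questions]
                  for label, ask in respondents.items()}

    guess = interrogator(questions, transcript)  # returns "A" or "B"
    return respondents[guess] is machine         # True: machine unmasked

# Hypothetical respondents: any callables mapping a question to an answer.
machine = lambda q: "I would say: " + q.lower()
human = lambda q: "Let me think about that. " + q.lower()

# An interrogator doing no better than chance; the machine passes against it.
chance_interrogator = lambda qs, transcript: random.choice(["A", "B"])

questions = ["Write me a sonnet on the Forth Bridge.", "Add 34957 to 70764."]
print(imitation_game(chance_interrogator, machine, human, questions))
```

If the interrogator's guesses are right no more often than chance over many rounds, the machine has passed.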
In 1956 Herbert Simon (a political scientist by training, later a Nobel laureate in Economics) and Allen Newell developed a system to prove theorems in formal logic. It used ``bounded rationality'' and stock recipes, or heuristics, to imitate human thinking. This program, the Logic Theorist, proved 38 theorems from Russell and Whitehead's Principia Mathematica, one of them more elegantly than Russell and Whitehead had.
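The Logic Theorist's central move was heuristic backward search: reduce the goal theorem to subgoals until everything is grounded in the axioms. The toy prover below sketches only that control structure under stated assumptions; the formulas, the pre-instantiated rules and the depth bound are illustrative, and the real program also handled substitution into Principia's axiom schemas, detachment (modus ponens) and chaining.

```python
# Toy backward-chaining prover in the spirit of the Logic Theorist.
# Axioms and rules are pre-instantiated illustrations, not Principia's
# actual schemas.
axioms = {"p or not p"}                        # formulas taken as given
rules = [                                      # (premises, conclusion)
    (("p or not p",), "not p or p"),
    (("not p or p",), "if p then p"),
]

def prove(goal, depth=5):
    """A goal is proved if it is an axiom, or the conclusion of a rule
    all of whose premises can themselves be proved (working backward)."""
    if goal in axioms:
        return True
    if depth == 0:                             # heuristic cut-off
        return False
    for premises, conclusion in rules:
        if conclusion == goal:
            if all(prove(p, depth - 1) for p in premises):
                return True
    return False

print(prove("if p then p"))                    # True
```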
(Also during 1954-56, work by Minsky and others on simulating neural networks on digital von Neumann computers.)
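A few lines of NumPy convey the flavour of such a simulation: Hebbian weight updates storing one activity pattern as a mutually excitatory ``cell assembly''. The network size, learning rate and pattern are assumptions for illustration, not details of any 1950s program.

```python
import numpy as np

n = 8
weights = np.zeros((n, n))
eta = 0.1                                       # learning rate (assumed)
pattern = np.array([1, 1, 0, 0, 1, 0, 1, 0])    # one binary activity pattern

for _ in range(20):                             # repeated co-activation
    # Hebb's rule: strengthen w_ij when units i and j fire together.
    weights += eta * np.outer(pattern, pattern)
np.fill_diagonal(weights, 0)                    # no self-connections

# The co-active units now excite one another, i.e. a "cell assembly":
recall = (weights @ pattern > weights.sum(axis=1) / 2).astype(int)
print(recall)                                   # reproduces the stored pattern
```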
Newell and Simon were committed to the idea of ``complex information processing'' which aimed to model human thinking. Here the computer program is seen as a test of a particular psychological theory of how a cognitive process might work in humans.
More generally, Newell and Simon insisted they were not merely demonstrating machine intelligence but were modelling general laws underlying all thinking - hence the idea of functionally interpreted systems: cognitive processing studied independently of its hardware or neural implementation.
Physical Symbol System hypothesis (Newell and Simon, 1976): the brain and a properly programmed digital computer are both examples of physical symbol systems. Cognitive phenomena consist in the manipulation of symbols, also called ``mental representations'', construed either as propositions (Fodor and Pylyshyn) or not; Newell calls the non-propositional kind ``models''. In either case these are functionally interpreted systems.
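A tiny production-style example may make the hypothesis concrete: symbol structures plus a rule that rewrites them, described with no reference to the underlying hardware. The tuple encoding and the single modus-ponens rule are illustrative assumptions, not Newell and Simon's notation.

```python
# Symbol structures ("mental representations") held in a working memory.
memory = {("implies", "wet", "slippery"), ("wet",)}

def step(memory):
    """One cycle of symbol manipulation: apply modus ponens wherever an
    implication's antecedent is also present as a symbol structure."""
    derived = set()
    for s in memory:
        if len(s) == 3 and s[0] == "implies" and (s[1],) in memory:
            derived.add((s[2],))
    return memory | derived

print(step(memory))  # ("slippery",) has been derived
```

Nothing in the description mentions transistors or neurons; any physical system implementing these operations runs the same cognitive process, which is the point of functional interpretation.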
Others, like John McCarthy and Marvin Minsky, preferred the term ``artificial intelligence''. AI seeks to produce on a computer a pattern of output that in human beings would be considered ``intelligent''. It leaves open the possibility that computer intelligence might differ from human intelligence.
MIT Symposium on Information Theory, Sept 10-12, 1956.
Topics: coding theory (Shannon and others); automata (Newell and Simon on the Logic Theorist's proof of theorem 2.01 in Russell and Whitehead's Principia Mathematica, and a computer implementation of Hebb's theory of cell assemblies); information sources (Chomsky's paper on transformational grammars); and information users (Miller's paper on the magical number seven).
The basic theme linking the disciplines was the mind as an information-processing system.
Bruner, Goodnow and Austin's A Study of Thinking was also published in 1956 - See sec. .
Chomsky in 1997 (Johnson and Erneling, 1997, sec. ) comments on the height of interest, at that time and place, in the prospect of a unified science: an element of triumphalism in the political culture, technological euphoria, a sense that barriers to progress were falling. Physics, chemistry and biology had been unified through quantum mechanics, biochemistry and molecular biology; the next great challenge was to understand the human mind.
During 1960-85 the classical AI approach (also known as the physical symbol system approach) dominated, and the main contributions to cognitive science came from AI, linguistics and psychology.
Since 1985 divergent approaches have appeared within AI, and neuroscience has seen new breakthroughs. In the last 15 years, neuroscience and AI in the form of artificial neural networks have played a larger role in cognitive science. Leaving aside the considerable applications-oriented work based on symbolic AI, the field of AI has moved towards adopting biological models based on the brain (connectionism) and on development and evolution (artificial life, or A-Life).