Next: Brain-like computational modelling: neural
Up: AI Lecture 2
Previous: AI Lecture 2
  Contents
- Performance is confined to limited, often highly restricted domains
consisting of specific problem situations or microworlds.
- The brittleness problem: SHRDLU's performance breaks down when
confronted with an utterance it is not explicitly programmed to
handle. Winograd says this is true of any program whose knowledge of
the world is represented in terms of explicit, static rules. Human
cognition, by contrast, is flexible: faced with an unsolvable problem,
it can come up with a partial solution. Humans also learn from past
mistakes.
- The AI models of human cognition were ``ludicrously
un(der?)determined'' by the data (Dennett, 1984?): any number of
widely varying models could fit the data equally well.
- Role of logic: Human cognition does not depend exclusively on
deductive logic; that would make even ordinary tasks impossible.
Heuristics (used to overcome combinatorial explosion) are still
rule-based. Human cognition uses mechanisms like association and
content-based retrieval (similar to Wittgenstein's family
resemblances).
- Common-sense knowledge: The countless tiny facts gathered by us
in the course of living.
Hubert Dreyfus, in What Computers (Still) Cannot Do, maintains that
intelligence necessarily requires general commonsense. E.g. Schank's
program using a restaurant script might be able to answer ``What
food was ordered?'' or ``How much did the person pay for his
food?'', but it could not answer ``What part of the body was used to
eat the food?'' or ``How many pairs of chopsticks were required to
eat the food?'' So much of life requires us to use our commonsense
that we never notice it.
- There is too much commonsense (CS) knowledge: it is too ambitious
to try to code it all into a computer.
- CS knowledge is tacit, not propositional (e.g. riding a bicycle).
- Often procedural, not declarative.
- Derived directly from the richness of experience, and so not
obtainable without it.
- Even if it were possible to code CS knowledge, the performance of a
serial system would get slower as more of it was added. Humans, on
the other hand, get faster when able to exploit additional
constraints (Rumelhart, PDP).
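The script limitation described above can be sketched in a few lines. This is an illustrative toy, not Schank's actual program: the slot keywords, fillers, and matching rule are all assumptions made for the example.

```python
# Toy script-based question answering in the spirit of Schank's
# restaurant script. The script only records slots it was written
# to track; everything else is invisible to the program.
restaurant_script = {
    ("food", "ordered"): "lobster",   # illustrative filler
    ("pay",): "$20",                  # illustrative filler
}

def answer(question, script):
    """Answer only if the question's keywords name an explicit slot."""
    q = question.lower()
    for keywords, value in script.items():
        if all(k in q for k in keywords):
            return value
    # A commonsense question falls outside the script's slots,
    # even when the answer is obvious to any person.
    return "cannot answer"

print(answer("What food was ordered?", restaurant_script))   # lobster
print(answer("How much did the person pay?", restaurant_script))  # $20
print(answer("What part of the body was used to eat the food?",
             restaurant_script))                             # cannot answer
```

The point of the sketch is Dreyfus's: the program's competence ends exactly where its explicit representation ends, whereas human commonsense extends indefinitely beyond anything written down.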
Later work in symbolic AI attempted to overcome these problems using
more powerful or more efficient architectures within the rule-based
paradigm. (Self-modifying production systems, Case-based reasoning)
The SOAR program (State, Operator And Result) (Newell et al.)
approaches problem solving differently: a large amount of
domain-specific knowledge, including models and production rules, is
stored in a ``long-term memory.''
When the current situation in ``working memory'' matches a rule in
the LTM, the rule is brought into WM. Beginning with some explicit
rules, the system ``learns'' new rules and stores them via a
``chunking'' mechanism.
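The match-fire-chunk cycle can be sketched as follows. This is a minimal illustration of a production system with a crude chunking step, loosely inspired by the description above; the rule format, the example rules, and the chunking function are assumptions for the sketch, not SOAR's actual mechanism.

```python
# Long-term memory: production rules as (condition set, result fact).
ltm = [
    ({"hungry", "have-money"}, "go-to-restaurant"),
    ({"go-to-restaurant"}, "order-food"),
    ({"order-food"}, "eat"),
]

def run(initial_facts, ltm):
    """Repeatedly fire any rule whose conditions hold in working
    memory, adding its result, until no rule adds a new fact."""
    wm = set(initial_facts)
    trace = []  # record of fired rules, in order
    changed = True
    while changed:
        changed = False
        for cond, result in ltm:
            if cond <= wm and result not in wm:
                wm.add(result)
                trace.append((cond, result))
                changed = True
    return wm, trace

def chunk(trace, initial_facts):
    """Collapse a multi-step solution trace into one new rule that
    maps the initial facts straight to the final result (a crude
    stand-in for SOAR-style chunking)."""
    if not trace:
        return None
    final_result = trace[-1][1]
    return (set(initial_facts), final_result)

initial = {"hungry", "have-money"}
wm, trace = run(initial, ltm)
new_rule = chunk(trace, initial)
ltm.append(new_rule)  # learned: hungry + have-money -> eat, in one step
```

After chunking, re-solving the same situation fires a single stored rule instead of replaying the three-step derivation, which is the sense in which the system ``learns'' from problem solving.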
ACT (Adaptive Control of Thought), John Anderson, 1983: a
production-system model supported by declarative and procedural
knowledge structures. (Similar to SOAR?) See discussion in Gardner,
1985, pp. 131-132. See also sec.
CYC Project, Douglas Lenat at MCC, Austin, Texas: the original
estimate of 500,000 commonsense rules has been revised to 2 million,
to be completed by 2025.