Back in 1950, while a physics major at Harvard, I wandered into C. I. Lewis’s epistemology course. There, Lewis was confidently expounding the need for an indubitable Given to ground knowledge, and he was explaining where that ground was to be found. I was so impressed that I immediately switched majors from ungrounded physics to grounded philosophy.
For a decade after that, I hung around Harvard writing my dissertation on ostensible objects -- the last vestige of the indubitable Given. During that time, no one at Harvard seemed to have noticed that Wilfrid Sellars had denounced the Myth of the Given, and that he and his colleagues were hard at work, not on a rock-solid foundation for knowledge, but on articulating the conceptual structure of our grasp of reality. Sellars’ decision to abandon the old Cartesian problem of indubitable grounding has clearly paid off. While Lewis is now read, if at all, as a dead end, Sellars’ research program is flourishing. John McDowell, for example, has replaced Lewis’s phenomenalist account of perceptual objects with an influential account of perception as giving us direct access to reality.
But, although almost everyone now agrees that knowledge doesn’t require an unshakeable foundation, many questions remain. Can we accept McDowell’s Sellarsian claim that perception is conceptual “all the way out,” thereby denying the more basic perceptual capacities we seem to share with prelinguistic infants and higher animals? More generally, can philosophers successfully describe the conceptual upper floors of the edifice of knowledge while ignoring the embodied coping taking place on the ground floor, in effect declaring that human experience is upper stories all the way down?
This evening, I’d like to convince you that we shouldn’t leave the conceptual component of our lives hanging in midair and suggest how philosophers who want to understand knowledge and action can profit from a phenomenological analysis of the nonconceptual embodied coping skills we share with animals and infants. . . .
Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian:
In 1963, I was invited by the RAND Corporation to evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive Simulation (CS). Newell and Simon claimed that both digital computers and the human mind could be understood as physical symbol systems, using strings of bits or streams of neuron pulses as symbols representing the external world. Intelligence, they claimed, merely required making the appropriate inferences from these internal representations. As they put it: “A physical symbol system has the necessary and sufficient means for general intelligent action.”
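To make the claim concrete, here is a minimal sketch, in Python, of what “making inferences from internal representations” can look like: the state of a toy world is stored as symbol structures, and an action is derived by applying a rule to them. The facts, the rule, and the function name are hypothetical illustrations, not drawn from any actual Newell and Simon program.

# A minimal sketch (hypothetical facts and rules, not from any actual
# Newell-Simon program) of the "physical symbol system" picture: the
# world is encoded as symbol structures, and action is derived by
# making inferences over those internal representations.

facts = {
    ("on", "B", "A"),        # block B is stacked on block A
    ("on", "A", "table"),    # block A rests on the table
    ("clear", "B"),          # nothing is on top of B
}

def infer_moves(facts):
    # Inference rule: any clear block not already on the table
    # may be moved to the table.
    moves = []
    for fact in facts:
        if fact[0] == "clear":
            block = fact[1]
            if ("on", block, "table") not in facts:
                moves.append(("move-to-table", block))
    return moves

print(infer_moves(facts))    # [('move-to-table', 'B')]

On this picture, nothing in the system touches the world directly; intelligence is exhausted by rule-governed manipulation of such internal symbols.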
As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS had learned a lot, directly and indirectly, from the philosophers. They had taken over Hobbes’ claim that reasoning was calculating, Descartes’ mental representations, Leibniz’s idea of a “universal characteristic” -- a set of primitives in which all knowledge could be expressed -- Kant’s claim that concepts were rules, Frege’s formalization of such rules, and Russell’s postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.
At the same time, I began to suspect that the critical insights formulated in existentialist armchairs, especially Heidegger’s and Merleau-Ponty’s, were bad news for those working in AI laboratories -- that, by combining rationalism, representationalism, conceptualism, formalism, and logical atomism into a research program, AI researchers had condemned their enterprise to reenact a failure. . . .