Monday, July 25, 2005

Lippard on Original vs. Derived Intentionality

Victor: Suppose human beings build a robot that is capable of responding to verbal commands and building internal representations of its environment through cameras and by moving around and manipulating its environment. You would say that its internal representations of objects in its environment have only derived intentionality, which comes from human intentionality.

Now suppose all human beings cease to exist, while the robot continues to function. The robot is then discovered by some other intelligent alien species with original intentionality. That species learns how the robot works, and infers that it has internal representations which correspond to objects in its environment.

What would you say about those internal representations during the time when there are no humans and before the aliens discover it? If the derived intentionality comes only from human intentionality, would you say that there is no representation going on anymore? Or does the derived intentionality survive the extinction of humans? Likewise, what would you say about the representations after the aliens discover it? Do they cause representation to begin anew?

On my view, it doesn't matter how the causal structures which cause covariance of the internal structures of the robot in correspondence with objects in its environment originate; that covariance is all there is to representation, and the robot has representations which refer regardless of who else exists. How would you describe these situations?

-- Jim Lippard
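Lippard's covariance picture can be made concrete with a minimal sketch (here in Python; the sensor readings, object labels, and update rule are hypothetical illustrations, not anyone's actual proposal): an internal state that is kept in causal correspondence with the environment, whether or not any human or alien is around to interpret it.

    # Minimal sketch of an internal state kept in causal covariance with the
    # environment. Sensor readings, labels, and the update rule are hypothetical.
    class Robot:
        def __init__(self):
            # Internal representation: a map from locations to what the
            # sensors most recently registered there.
            self.world_model = {}

        def sense(self, location, reading):
            # The causal link: whatever the cameras deliver gets recorded,
            # so inner states covary with outer objects.
            self.world_model[location] = reading

        def report(self, location):
            return self.world_model.get(location, "unknown")

    robot = Robot()
    robot.sense("corner", {"kind": "rock", "temperature": "warm", "size": "big"})
    print(robot.report("corner"))
    # The covariance holds whether or not any humans remain to read this state,
    # and whether or not aliens later reverse-engineer it.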

Jim: My concept of original intentionality requires that it be the intentionality of some conscious thinking subject to whom the objects are represented. We can attribute an as-if intentionality to systems where there is this kind of covariation between inner states and objects in the world, but unless it is recognized by a thinking subject, it is merely as-if intentionality and not original intentionality.

2 comments:

Giordano Sagredo said...

Let's say the robot has internal states that were designed (using some nice external sensors) to reliably covary with temperature as well as object identity (e.g., rocks, other robots, plants, animals). It meets another robot after walking a bit, and says "There is a big warm rock around the corner, behind the plant".

The other robot goes around the corner for a bit, and comes back saying "No, it is a big warm lion, not a rock. I lifted the plant out of the way and saw it was a lion."

The first robot says, "Oh, thank you: there is a big lion around the corner."

*******

The robots have internal states that were designed to covary with states of the world (in fact, they have the function of picking out or referring to things in the world); these internal models can be wrong (i.e., these processes can malfunction); and the internal states can be revised in light of new evidence.
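A minimal sketch of these three features (again in Python, with hypothetical object labels and a hypothetical revise step) might look like this:

    # Toy illustration: an internal model that covaries with the world, can be
    # wrong (misrepresent), and is revised in light of another robot's report.
    def revise(world_model, location, corrected_reading):
        old = world_model.get(location)            # what the model said (e.g., "rock")
        world_model[location] = corrected_reading  # update on the new evidence
        return old, corrected_reading

    model = {"corner": {"kind": "rock", "temperature": "warm", "size": "big"}}
    before, after = revise(model, "corner",
                           {"kind": "lion", "temperature": "warm", "size": "big"})
    print("was:", before["kind"], "-> now:", after["kind"])  # was: rock -> now: lion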

I would say these robots' utterances have semantic properties (i.e., the individual terms have referents) which confer truth values on their utterances. This would still be the case if all humans were killed, indeed if all non-robots were eradicated.

If you would not want to say their utterances have truth values, what epistemic or semantic properties would you give the utterances? Are their utterances no different from the babblings of a brook?

Anonymous said...

-----------
I would say these robots' utterances have semantic properties (i.e., the individual terms have referents) which confer truth values on their utterances. This would still be the case if all humans were killed, indeed if all non-robots were eradicated.

If you would not want to say their utterances have truth values, what epistemic or semantic properties would you give the utterances? Are their utterances no different from the babblings of a brook?
-----------

I missed the first part of this conversation, so I'm not really sure how you defined "robot". But if you're talking about a robot in the everyday sense of the word (i.e., a programmed computer), then it CANNOT have semantics: by the *very definition* of a computer, it only manipulates symbols syntactically, and syntax alone does not yield semantics. If your robot was designed to have an artificial brain, then it CAN have semantics. With this, you can answer your own question.