Posted by: captainfalcon | September 27, 2009


I respond here to MM’s reply to “Doesn’t quite compute:”

I read the CRE (Chinese Room Experiment) mostly as a statement about the observability of intelligence – something can act as if intelligent, at least within a limited or predicted scope, while not being intelligent. If one wants to take the experiment to its fullest power, it has to claim that ALL intelligent behavior could be mimicked by something lacking intelligence, given sufficient preparation and attention to detail. To put it briefly, “a comprehensive weak AI can exist.”

If accepted, then one must unfortunately grant that merely observing a human does not allow one to confidently conclude that the human is intelligent. This is the worst consequence you have to swallow, but it’s not as damning as it seems.

Presumably you have a unique perspective which allows you to judge yourself intelligent. Thus, if you observe another human, who you know has a brain very similar to yours, acting intelligently, you have pretty good evidence that they are intelligent. On the other hand, we generally don’t consider Notepad to be sentient, so perhaps we should be reluctant to label other programs as sentient as well, even if they perform more intelligently.

1. I think your and my readings of CRE may be compatible. On your reading, CRE is designed to indicate that a comprehensive weak AI can exist, which, in turn, is supposed to render this claim plausible:

(WAID) x’s seeming to be intelligent does not (by itself) warrant the conclusion that x is intelligent. [“WAID” for “weak-AI default”]

I agree with that. But I’d say CRE attempts to render the possibility of a comprehensive weak AI / WAID plausible <i>by</i> describing the causal chain resulting in x’s apparently intelligent behavior in such a way that it seems weird to us that x could be capable of thought. After all, Searle’s explanation of why CRE renders WAID plausible is that “it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing.” Presumably, this is so obvious because the mechanism by which the outputs are generated just doesn’t seem to Searle like the sort of thing that could give rise to understanding.

2. As far as your (half-hearted) defense of WAID, I think you’re right that one can accept it and also acknowledge that there are other minds. WAID is compatible with two other principles, which allow Searle to resist the skeptical hypotheses (a) that there are <i>no</i> minds at all and (b) that there are no <i>other</i> minds. These principles are:

(SK) Introspective evidence can provide warrant for the conclusion that you are intelligent.

And, which you invoke:

(OM) If (i) you are warranted in thinking that you are intelligent, and (ii) you are warranted in thinking that x’s brain is of the same kind as yours, and (iii) x seems to be intelligent, then (iv) you are warranted in thinking that x is intelligent. (Do you think Searle accepts a stronger, biconditional, version of OM + SK?)

3. So we can accept WAID and also acknowledge that humans are intelligent creatures. But WAID (supplemented only by SK and OM) has two other unpalatable implications (both of which have been remarked before):

(a) If we encounter aliens who seem intelligent but whose brains aren’t sufficiently like ours we will not be licensed to conclude that they’re intelligent. (Likewise, if animal brains are insufficiently like ours, we aren’t licensed to conclude that e.g. dogs are intelligent. But they seem to have some sort of mental life.) So WAID gives rise to a kind of speciesism.

(b) Humans weren’t warranted in thinking that their fellows were intelligent beings until we had some knowledge of what the brain is like. (Unless, that is, we could conclude that because we looked like each other on the outside we were similar on the inside. But why, absent the biological evidence we now possess but did not always, should that follow?)

4. My final problem with the WAID + SK + OM axis is that defaulting to the view that only human brains can give rise to intelligent behavior leaves a big mystery: what’s so special about the human brain? Why is this particular configuration of parts <i>sui generis</i>? Other configurations of parts (theoretically) give rise to strikingly similar (perhaps identical) observable behaviors. Why should we decide that they don’t give rise to the same set of unobservable effects, as well?

Update: I notice that I come very close to contradicting myself. Oh well.

