First, let us ask a question: how do you know that another person understands a particular concept in the same way as you do? You ask about different aspects of the concept and see whether the answers correlate with your own answers to the same questions. It is possible that up to a point the answers correlate well, yet the person still misunderstands the concept. Once you recognize this possibility, you ask more questions to rule out such cases. Theoretically, after any length of communication the person may still misunderstand the concept, but for a long enough conversation this becomes highly improbable. One can say that the assurance of mutual understanding asymptotically approaches full confidence (without ever reaching it) and can be brought above any level set in advance. In this sense it is valid to say that one can be “completely” sure that there is no misunderstanding between the communicating parties.
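To make the asymptotic claim concrete, here is a minimal formalization under an assumption of my own (it is not part of the original argument): suppose that when the concept is genuinely misunderstood, each independent probing question has probability at most $p < 1$ of being answered consistently with your own understanding. Then the probability that the misunderstanding survives $n$ probes undetected satisfies

$$P(\text{undetected misunderstanding after } n \text{ probes}) \le p^{n} \xrightarrow[n \to \infty]{} 0,$$

so for any desired confidence level $1 - \varepsilon$ it suffices to ask $n > \ln \varepsilon / \ln p$ questions. Confidence approaches 1 but never reaches it, exactly as claimed above.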
Secondly, having consciousness implies a specific type of behaviour in a person or animal. When you recognize the correspondence between your own conscious behaviour and your behaviour as perceived by others, you can project this correspondence onto other people. Here I do not distinguish between a person's behaviour and hir answers to questions. Now, just by observing the behaviour of another person for long enough, you may come to the conclusion that the other person has consciousness just as you do.
Finally, since the correlation between the reactions of a real person and the reactions of an imaginary conscious person making conscious decisions can be driven to any degree of confidence that the real person has consciousness, it is impossible to create a zombie-like mind that mimics conscious behaviour while lacking consciousness. In other words, Searle's weak AI, a system that merely simulates a mind without having one, is impossible and cannot exist.