sanscardinality wrote: Sorry if I came across as argumentative.
Oops. I mean "argument" as in a persuasive line of reasoning. Did not mean to slander you or accuse you of being, well, vulgar.
sanscardinality wrote: but there's no particular reason to think so any more than there is a reason to think two watches that keep similar time have similar mechanisms.
Well, there's the rub. Is consciousness the mechanism or the outcome? Two watches may have very different mechanisms, but they both keep time. Is one "real" time and the other "pretend" time? And this is my concern: conflating the mechanism and the outcome.
Let me give you an example. The philosopher John Searle has long been a critic of the possibility of AI--in essence he insists that it is, in principle, impossible for machines to think, have consciousness, etc. Here is one of his arguments, not verbatim, but close: he says that having an algorithm that appears to think (be conscious, self-aware, etc.) is a model of thinking but not "real" thinking. He further gives the example of digestion: a computer model of digestion is just a model, not real digestion.
But I say this is a false analogy. The real analogy would be a plastic-and-glass stomach that adds chemicals to process food and make it suitable for uptake. It might use different chemical processes, but in the end a source of energy and building blocks is broken down into a form the body can absorb. It's like SC's watch analogy: the mechanisms may differ, but if the food is digested, the digestion is real.
Look, the only thing
I can judge by is the product. If you have an algorithm that persuasively acts as if it is conscious, self-aware, thinking, then frankly I don't see how someone could argue--sorry, put forward a persuasive line of reasoning--that it is not those things. And to address Athena's point about the Turing test--I acknowledge it won't be simple, and there will be arguments, er, persuasive lines of reasoning. But I am extremely uncomfortable with
a priori assumptions that there is "real" thinking and "pretend" thinking when we don't really understand it. Maybe we will have a good criterion for distinguishing them. But maybe we won't. Right now it's pretty easy to dismiss ELIZA and other programs that trick the unwary into thinking they are talking to a real person (see the toy sketch at the end of this post), and we are far, far, far from any artificial consciousness, real or pretend. But I'm willing to bet that if and when we make progress and get closer, it's going to be really hard to make a distinction that isn't simply on the basis of discrimination, e.g., it's only "real" thinking if it's done with carbon and not with silicon. In fact, it is exactly
because the issue is of importance to ethics and so on that I am, on principle, unwilling to concede differences between "real" and "pretend" consciousness when none of us have any basis whatsoever to make that distinction. I'm sorry if
I am being argumentative. But none of you have put forward a persuasive line of reasoning toward a distinction between "real" and "pretend" thinking. This is why I agreed with the Dijkstra comment that started this thread. I think (if you believe I am doing "real" thinking) that discussions of whether or not a submarine "swims" are, for me, as pointless as a discussion of whether two watches with different mechanisms are both keeping "real" time. Discussing the differences in the mechanisms and how they achieve the same end is fascinating and fruitful. Discussing what consciousness is, and how it arises in biological systems, is fascinating. Speculation about how consciousness might arise in algorithmic systems, while premature, could also be fascinating. Even discussion of how we might be "fooled" into thinking an algorithmic system is conscious, as with the Turing test, is interesting.
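
P.S. To make concrete why ELIZA is so easy to dismiss: the whole trick is keyword matching plus canned reflection. Here is a minimal sketch of that technique in Python. The patterns and responses are my own invention for illustration, not Weizenbaum's actual script, but the mechanism is the same.

    import random
    import re

    # Toy ELIZA-style rules: a regex keyword trigger mapped to canned
    # response templates. {0} is filled with whatever text the group captured.
    RULES = [
        (re.compile(r"\bI need (.*)", re.I),
         ["Why do you need {0}?", "Would getting {0} really help you?"]),
        (re.compile(r"\bI am (.*)", re.I),
         ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (re.compile(r"\bmy (\w+)", re.I),
         ["Tell me more about your {0}."]),
    ]
    # Content-free fallbacks for when no keyword matches.
    DEFAULT = ["Please go on.", "I see. Can you elaborate?"]

    def respond(utterance: str) -> str:
        # The first keyword rule that matches wins; there is no parsing,
        # no memory, and no model of meaning anywhere.
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(DEFAULT)

    print(respond("I am worried about my thesis"))
    # -> e.g. "How long have you been worried about my thesis?"

Notice the giveaway in the sample output: the real ELIZA at least swapped pronouns ("my" to "your"), which this sketch doesn't bother with. Either way, a second or third exchange exposes the trick, which is exactly why a system that sustained the appearance of thought under arbitrary scrutiny would be a different beast entirely.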