What was even more interesting was the way the journalists freaked out about the interactions. The accompanying headlines made it sound like a scene straight out of "Westworld." The robots, at long last, were finally coming for us. Thompson called his encounter with Sydney "the most surprising and mind-blowing computer experience of my life." Roose declared himself "deeply unsettled, even frightened, by this AI's emergent abilities." Not just smart - sentient, possessed of personhood.

Philosophically speaking, there is no there there. The foundational neural networks that run these chatbots have neither dimensions, senses, affections, nor passions. If you prick them, they do not bleed, because they don't have blood, nor a them. They are software, programmed to deploy a model of language to pick a word, and then the next, and the next - with style. We aren't talking about Cylons or Commander Data here - self-aware androids with, like us, unalienable rights. The Google and Microsoft bots have no more intelligence than Gmail or Microsoft Word. They're just designed to sound as if they do.

The companies that build them are hoping we'll mistake their conversational deftness and invocations of an inner life for actual selfhood. It's a profit-making move designed to leverage our very human tendency to see human traits in nonhuman things. And if we aren't careful, it may well tip over into disinformation and dangerous manipulation.

Humans have a hard time telling whether something is conscious. Scientists and philosophers call it the problem of other minds, and it's a doozy. René Descartes was working on it when he came up with the "I think, therefore I am" thing - because the follow-up question is, "So what are you, then?"

For Descartes, there were two kinds of entities: persons, with all the rights and responsibilities of sentience, and things, which don't have that. In what proved to be a bummer for most life on Earth, Descartes thought that nonhuman animals were in the second category. And even if most folks no longer consider animals to be mere preprogrammed automata, we still have trouble agreeing on a definition of what constitutes consciousness. "There is some loose agreement, but it's still a contested term across different disciplines," says David Gunkel, a media studies professor at Northern Illinois University who argues that robots probably deserve some rights. "Oh, a dog or a cat is sentient, but not a lobster? Really? What is that line? Who gets to draw that line? There's an epistemological barrier with regards to gathering the evidence."

For at least a century scholars and sci-fi writers have wondered what would happen if machines got smart. Would they be slaves? Would they rebel? And maybe most important, if they were smart, how would we tell? The computer scientist Alan Turing came up with a test. Basically, he said, if a computer can indistinguishably imitate a human, it's sentient enough.

That test, however, has a bunch of hackable loopholes, including the one that Sydney and the other new search-engine chatbots are leaping through with the speed of a replicant chasing Harrison Ford. It's this: The only way to tell whether some other entity is thinking, reasoning, or feeling is to ask it. So something that can answer in a good facsimile of human language can beat the test without actually passing it. Once we start using language as a lone signifier of humanity, we're in a world of trouble. After all, lots of nonhuman things use some form of communication, many of which can be pretty sophisticated.