Are we nearing a time when we are going to get to have real, meaningful conversations with artificial intelligence?
Nitasha Tiku got the world wondering just that with her story in the Washington Post about a Google engineer who believes that the company’s LaMDA artificial intelligence might be sentient. Google engineer Blake Lemoine carried out a series of seemingly personal conversations with the artificial intelligence and walked away believing that there was a sort of person behind the messages he was receiving.
Artificial intelligence expert Gary Marcus thinks the idea that artificial intelligence systems are anywhere close to sentience is patently absurd. He wrote on his Substack:
Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.
On Dead Cat, Tom Dotan and I talked to Marcus about artificial intelligence, how tech companies should frame these text-generating machines to their users, and the media’s failure to cover speculative technologies skeptically. (In the episode we make reference to Marcus’s post Does AI really need a paradigm shift?)
Give it a listen.
Read the automated transcript.