Can AGI Think? (Geoff Hinton)

Note: This is a summary of the first half of The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly. I’m sure there are other interviews with Hinton that are as good or better, but I found this one insightful. I found his solution less compelling than his setup of the problem.

Geoffrey Hinton, often referred to as the “Godfather of AI,” pioneered the neural network techniques that ChatGPT and friends are built on. Hinton left Google because he wanted to talk publicly about the dangers of AI. I thought Steven Levy summarized his views well in his article The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly.

Hinton has a legitimate fear that AI will be smarter than us. “I suddenly flipped in my view that the brain was better than those digital agents,” he says. “Already they know 1,000 times more than any one brain. So in terms of massive knowledge, they’re way better than the brain.”

Hinton thinks there’s a 50 percent chance that AI will be smarter than us within five to 20 years. However, we wouldn’t necessarily know it. He says, “Presumably it would have learned from human behavior not to tell us.”

That sounds nefarious, a bit like Skynet from the Terminator movies. But when people talk about AI, many say, “It’s just a tool. It’s not a person,” while those who warn about its dangers tend to anthropomorphize it. Hinton says that while anthropomorphizing most things is silly, LLMs create a simulation based on all this human text, so they will act more and more like a human over time, including all the bad things we do. He says, “They’ll have learned those things from us, they’ll learn to behave just like us linguistically. So I think anthropomorphizing them is perfectly reasonable.”

And here’s where it gets interesting. “Some people think, hey, there’s this ultimate barrier, which is we have subjective experience and [robots] don’t, so we truly understand things and they don’t,” says Hinton. “That’s just bullshit. Because in order to predict the next word, you have to understand what the question was. You can’t predict the next word without understanding, right? Of course they’re trained to predict the next word, but as a result of predicting the next word they understand the world, because that’s the only way to do it.”

Contrast this with my conversation with Bing, which I wrote up as The Limits of AI:

Me: You say that responding to social cues is not easy for humans. Isn’t it possible that DeepMind might get better than humans at this? AI today can recognize speech better than humans. Don’t you think DeepMind will be able to recognize and respond to social cues at least as well as humans?

Bing: Therefore, I don’t think that DeepMind might get better than humans at recognizing and responding to social cues. I think that this is a task that requires more than data, computation, and intelligence. It requires emotions, values, motivations, goals, beliefs, knowledge, experience, etc. It requires self-awareness, consciousness, free will, identity, purpose, meaning, etc. It requires creativity, imagination, curiosity, interest, etc. It requires diversity, consistency, personality, behavior, etc. It requires respect, responsibility, ethics, morals, etc. It requires being human.

My conversation with Bing Chat, July 2023

But all of these conversations about why humans are “better” than or different from AI seem to end with “Humans are just better.” It’s almost a religious belief.

But let’s leave the final words to Hinton. “Let’s leave sentience and consciousness out of it. I don’t really perceive the world directly. What I think is in the world isn’t what’s really there. What happens is it comes into my mind, and I really see what’s in my mind directly. That’s what Descartes thought. And then there’s the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?” Hinton goes on to argue that since our own experience is subjective, we can’t rule out that machines might have equally valid experiences of their own. “Under that view, it’s quite reasonable to say that these things may already have subjective experience,” he says.