Hi everyone. I’m so glad to be at my first CG Councils event. I was told to always start a speech with a joke. I’m not good at writing jokes, so I asked GPT-4 to write some for me. Here’s the prompt I used: “Go to cgcouncils.com to learn about the Collaborative Gain Councils. Now create some jokes that start with: How many Collaborative Gain Councils members do you need to screw in a light bulb?”
- None. They just use their bright ideas to illuminate the room.
- 8. One to screw in the light bulb and seven to join a job search council to help him or her find a new opportunity.
- 5. One to screw in the light bulb and four to share their stories of how they overcame similar challenges in their careers.
But let’s get serious. How did the other team come up with their arguments? Clearly, they used ChatGPT. Just remember that when you hear their arguments: they are pushing a purely analytical mindset and defining sentience on their robot overlords’ terms, not human terms. Even ChatGPT will tell you the following: the metrics for sentience have always been changing.
Let’s start with the basics about sentience. The word sentient originally meant the ability to sense things. Take the example of a simple vehicle, one that responds to light: it has a solar panel and a motor, and when there’s light, its motor engages and it scurries away. It functions just like a bug, so we have our first sentient robot. Certainly, by this definition, a Roomba would qualify.
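In fact, the entire “mind” of such a vehicle fits in a few lines. Here’s a minimal sketch; the sensor and motor functions and the threshold are placeholders I’ve invented, not any real robot’s API:

```python
import time

# The vehicle's complete behavior: sense light, engage the motor.
# read_light_sensor and set_motor are hypothetical placeholders.

LIGHT_THRESHOLD = 0.5  # assumed: sensor reading normalized to [0, 1]

def read_light_sensor() -> float:
    # Placeholder: in a real vehicle, read the solar panel's output.
    return 0.0

def set_motor(on: bool) -> None:
    # Placeholder: in a real vehicle, drive the motor.
    print("motor on" if on else "motor off")

def control_loop() -> None:
    # One comparison, repeated forever. That's the whole organism.
    while True:
        set_motor(read_light_sensor() > LIGHT_THRESHOLD)
        time.sleep(0.1)
```

One comparison, repeated forever. If that counts as sentience, the bar is very low.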
While we certainly don’t think of a Roomba as sentient, let’s get a little more serious. When the term “Artificial Intelligence” was coined, it described the things that only humans could do, and the key artificial intelligence game was chess. But computers can now best the best chess masters hands down: in 1997, Garry Kasparov was beaten by Deep Blue, and today even the best players in the world don’t have a chance against the best engines. The open-source program Stockfish has an estimated Elo rating of about 3700, far above Magnus Carlsen, the best human player in the world, at about 2800. Yet we don’t think of chess computers as sentient.
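To put those ratings in perspective, the Elo system itself tells us what a 900-point gap means. Here is the standard Elo expected-score formula applied to the rough figures above (the ratings are the estimates I just quoted, not official numbers):

```python
# Standard Elo expected score: E = 1 / (1 + 10 ** ((R_opponent - R) / 400)).
# The ratings below are the rough figures quoted above, not official numbers.

def expected_score(rating: float, opponent: float) -> float:
    return 1.0 / (1.0 + 10.0 ** ((opponent - rating) / 400.0))

human, engine = 2800, 3700
print(f"Human expected score per game: {expected_score(human, engine):.4f}")
# -> about 0.0056
```

That’s roughly half a point per hundred games for the best human alive.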
Our opponents will say, “That’s not fair. The goalposts keep changing.” Every time computers reach a goal, the goalposts move. My colleagues will dig deeper into this, but let’s use a simple definition for sentience: “to be human.”
Let’s talk about robots being human. Last year, Blake Lemoine was fired from his job at Google. Why? Because he started telling people that Google’s LLM, called LaMDA, was sentient. He went so far as to say that if it was sentient, its wants should be respected. He was wrong in thinking it was sentient, but why did he think this? One main reason was how he defined sentience. Most computer scientists are familiar with the Turing Test, the classic test of artificial intelligence proposed by Alan Turing in 1950. It involves a human judge who interacts with a human and a machine through text messages and tries to determine which is which. If the machine can fool the judge into thinking it is human, it passes the test. LLMs like OpenAI’s GPT-4 perform well here, generating fluent, coherent responses that mimic human conversation; clearly GPT-4 can pass this test. But this is not a test of thinking. It is a test of deception.
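The structure of the test is almost embarrassingly simple. Here’s a sketch of its shape; the judge and contestant objects are hypothetical stand-ins I’ve made up, not any real benchmark’s API:

```python
import random

def turing_test(judge, human, machine, rounds: int = 10) -> bool:
    """Return True if the machine fools the judge (the judge guesses wrong)."""
    # Randomly assign anonymous labels so the judge can't rely on position.
    pair = [human, machine]
    random.shuffle(pair)
    labels = {"A": pair[0], "B": pair[1]}
    for _ in range(rounds):
        for name, contestant in labels.items():
            answer = contestant.reply(judge.ask(name))
            judge.observe(name, answer)
    guess = judge.guess_machine()  # judge names "A" or "B" as the machine
    return labels[guess] is not machine
```

Notice what is being measured: not whether the machine thinks, but whether the judge can be fooled.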
So let’s keep this closer to home and ask whether these machines are really “sentient.” Would we want an LLM to attend The Councils and hold a membership? If it had arms and legs, would we want to invite it on “Choose Your Adventure” tomorrow? And why not? Because there won’t be, and there shouldn’t be, that human connection. I’m running out of time for this, but we can talk about it later.
Let me tell you a story from John Maeda’s How to Speak Machine. Maeda is Microsoft’s head of AI and Design, a big player in this field. He tells this story: a major soup company had invested a fortune in creating an “expert system” (the first generation of AI) to make soup in their factories just like the human operators did. The soup company’s problem was that their best factory operators were all getting older, and the company wasn’t sure how to deal with them all eventually retiring. So the soup-making experts were carefully observed, and all their actions and ways of thinking were encoded as IF-THEN rules. The day finally came for the factory to fire up the AI system and make some soup. But the results were disappointing; in fact, the soup tasted terrible. Maeda was a big fan of expert systems and was shocked to hear about this failure, so he asked the engineers what happened. “It was really quite simple and funny,” one of them said. “They asked one of the old guys to explain why the soup tasted bad. He stepped forward, leaned over the soup bowl, and sniffed it a few times loudly. His response was, ‘It smells bad.’” There’s nothing more I can say about this proposal. It just smells bad.
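If you’re wondering what “encoded as IF-THEN rules” actually looks like, here’s a toy rule base in that classic style. The features and thresholds are mine, invented for illustration, not from Maeda’s account; notice that a judgment the operators couldn’t articulate, like how the soup smells, never makes it into the rules at all:

```python
# A toy expert-system rule base in the classic IF-THEN style.
# Features and thresholds are invented for illustration.

def adjust_soup(temperature_c: float, salinity_pct: float, viscosity_cp: float) -> list[str]:
    actions = []
    if temperature_c < 82:
        actions.append("increase heat")
    if salinity_pct < 0.8:
        actions.append("add salt")
    if viscosity_cp > 150:
        actions.append("thin with stock")
    # Missing by construction: no rule can fire on "it smells bad,"
    # because no one managed to state that judgment as a rule.
    return actions

print(adjust_soup(80.0, 0.7, 160.0))
# -> ['increase heat', 'add salt', 'thin with stock']
```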
I want to leave you with one thought: “What does your vote mean?” Robots can help us manage the world, but we should be adamant that they shouldn’t lead us. While this seems like a fun little debate about processing power and complexity, it’s about more than that. A vote for non-sentience is a vote to defend the human race against the emerging robot intelligence.