Can AI Think?
That’s a tricky question; it depends on what you mean by thinking. People often use the word sentient to describe thinking.
Sentient is a fancy word for “thinking,” but at its heart it means “sensing.” Let’s look at what John Maeda says in his book How to Talk to Machines.
On the first day of my seventh-grade biology class, my teacher Mrs. Figueroa explained how biology was the study of life, and you knew something was alive when you could see it “react to stimuli.”
Seeing something move is the first clue that something is living. Think of how mesmerizing it can be to look at a lit candle, or even your image in a mirror; at first, you can’t help but think what you’re seeing is alive and distinctly separate from yourself. We react to shadows moving in a dark forest, we react to seeing faint movements in a still pond, we notice when a book falls from a shelf in a quiet room; we either connect it with something living or something reacting to nature, or something supernatural like a ghost. All of these phenomena have some connection to the living world: animals, nature, the undead.
Scientist Valentino Braitenberg demonstrated how the mind can interpret lifelike behaviors from a world of simple robots composed of electronic building blocks. In this world, there are motors and sensors, and by plugging them together into different configurations, certain biological behaviors emerge from certain combinations. For instance, Braitenberg imagined a simple wheeled robot with a single motor to propel itself forward and a sensor to detect light. The vehicle was programmed such that when light was shining on it, it would move. Otherwise, it would stop. The more light shined on it, the faster it would go. The less light shined on it, the slower it would go.
Now imagine the logic in reverse. More light equals slower; less light equals faster. It’s a vehicle that comes to a complete halt in the brightest light, but otherwise scurries forward when it’s pitch-dark. If this vehicle was the size of your fist, you might see it as nothing extraordinary. However, if you saw the same vehicle at the scale of a penny, you would undoubtedly yell, “It’s a cockroach!” What other creature loves to hide in the darkness and grime, and does its best not to move in daylight? Braitenberg went on to design other variations of lifelike behavior using his tool kit of parts to demonstrate even more sophisticated traits of living things, like aggression, love, and foresight. His vision lives on in part with the little vacuuming robots that skitter about on the floor by using simple behaviors that embody a kind of intelligence.
As for whether today we can be completely fooled by a robot and think it might truly be alive, even after close inspection, well, we’re not there yet.
John Maeda, How to Talk to Machines (Page 66)
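To make Braitenberg’s idea concrete, here is a minimal sketch of the light-following vehicle and its reversed “cockroach” variant. This is not Braitenberg’s own formulation, just a toy rule under the simplest assumption that speed is proportional to the light hitting the sensor (or its inverse).

```python
def vehicle_speed(light: float, flees_light: bool = False) -> float:
    """Toy Braitenberg vehicle: map a light reading in [0, 1] to a speed in [0, 1].

    flees_light=False: more light -> faster (the original vehicle).
    flees_light=True:  more light -> slower (the 'cockroach' variant).
    """
    light = max(0.0, min(1.0, light))      # clamp the sensor reading
    return (1.0 - light) if flees_light else light

# The original vehicle races toward bright light and halts in darkness...
print(vehicle_speed(0.9))                    # 0.9 -> fast
print(vehicle_speed(0.0))                    # 0.0 -> stopped

# ...while the reversed wiring scurries in the dark and freezes in daylight.
print(vehicle_speed(0.9, flees_light=True))  # 0.1 -> barely moves
print(vehicle_speed(0.0, flees_light=True))  # 1.0 -> full speed
```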
But that’s too simple; we know that a Roomba doesn’t think. So let’s go back to another definition of AI, the original one: can computers do things that “only” people can do? In the earliest days of AI, the example was chess. Chess was seen as the most intellectual of games, so if a computer could play chess, it would be intelligent. Then, in 1997, Garry Kasparov was beaten by Deep Blue. Now even the best players in the world don’t have a chance against the best computers. The open-source program Stockfish has an estimated Elo rating of about 3700, far better than Magnus Carlsen, the best human player, who is at about 2800. Question for the reader: What about combining humans and machines, so-called cyborgs or “freestyle chess”?
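For a sense of scale, the standard Elo expected-score formula turns that 900-point gap into win probabilities. This is a back-of-the-envelope sketch using the approximate ratings quoted above, not an official measurement.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo formula: expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 2800-rated human against a 3700-rated engine:
print(expected_score(2800, 3700))  # ~0.006, i.e. well under one point per 100 games
```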
But What About “Thinking?”
But that’s not really “thinking.” That’s just using computers to follow a set of rules. Older computer systems required experts to hand-enter knowledge into them, with the goal of making the computer as good as a human. It’s like teaching a high school student a foreign language with a grammar book and vocabulary lists: it’s a lot of work, and the student rarely gets to the level of a native speaker.
But what if we changed the model? What if we could let machines learn the same way we do? AI did this by building a basic model of the brain called a neural network, which has “neurons” and “connections” that resemble the brain at a high level. These systems are able to take data and “learn.” Most people are surprised to learn that Google’s spell check and translation engines are not powered by human linguists but by machines that have “learned” how to do these things by analyzing statistical data. Essentially the computer, through statistical analysis, can look at the input data and convert it into the “right” answer.
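As a loose illustration of what “neurons” and “connections” mean here, the sketch below implements a single artificial neuron in plain Python. The weights are hand-picked for the example; in a real neural network, millions or billions of such units have their weights adjusted automatically from data.

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial 'neuron': weight the inputs, sum them, squash to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

# Hand-picked weights for illustration; training would learn these from data.
print(neuron([0.5, 0.8], weights=[1.2, -0.7], bias=0.1))
```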
These models were successful at many AI tasks, but they didn’t do anything resembling “thinking.” Then the team at OpenAI tried something new: what if we used these neural networks to predict the next word, training them on the entire internet? By predicting the next word in a text, the models could gradually write sentences, paragraphs, and essays.
What was shocking was that by doing this, the models didn’t just predict the next word; they seemed to have figured out how language works. As Stephen Wolfram wrote in his summary of the technology, “I think we have to view this as a potentially surprising scientific discovery: that somehow in a neural net like ChatGPT, it’s possible to capture the essence of what human brains manage to do in generating language.”
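To make “predicting the next word” concrete, here is a deliberately tiny sketch that counts which word tends to follow which in a toy corpus and always picks the most common continuation. An LLM does something vastly more sophisticated with a neural network trained on billions of documents, but the underlying task, guessing the next word from what came before, is the same.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "the entire internet".
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Count, for each word, what tends to follow it.
next_counts: dict[str, Counter] = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

# Generate a short continuation by repeatedly predicting the next word.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))   # e.g. "the cat sat on the cat"
```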
How Can We Test “Thinking”?
In his article The Search for a New Test of Artificial Intelligence, Gary Marcus wrote about various types of artificial intelligence tests.
Prompt: Tell me about how OpenAI has done in standardized tests of human intelligence. These are tests like the SAT, the bar exam, the MCAT and similar standardized tests.
Answer: According to OpenAI’s technical report[2], GPT-4 has achieved the following scores on some of the standardized tests:
- Uniform Bar Exam (UBE): 323 out of 400 (passing score is 260). This score is around the top 10% of test takers. The UBE is a two-day exam that consists of multiple-choice questions and essays on various areas of law.
- Law School Admission Test (LSAT): 174 out of 180 (average score is 152). This score is around the top 1% of test takers. The LSAT is a four-hour exam that consists of multiple-choice questions and a writing sample on logical reasoning, analytical reasoning, and reading comprehension.
- Graduate Record Examination (GRE) Quantitative: 170 out of 170 (average score is 153). This score is around the top 1% of test takers. The GRE Quantitative is a 70-minute exam that consists of multiple-choice questions on arithmetic, algebra, geometry, and data analysis.
- Advanced Placement (AP) subject tests: Passing scores (3 or higher) on more than a dozen different AP subject tests, such as Calculus AB, Chemistry, Physics C, Biology, English Language and Composition, World History, and more. These tests are typically taken by high school students to earn college credit or placement. They consist of multiple-choice questions and free-response questions on various academic subjects.
Prompt: Can AI Pass the Turing Test?
Answer: This is the classic test of artificial intelligence, proposed by Alan Turing in 1950. It involves a human judge who interacts with a human and a machine through text messages, and tries to determine which is which. If the machine can fool the judge into thinking that it is human, it passes the test. LLMs like OpenAI can perform well on this test, as they can generate fluent and coherent text responses that mimic human conversation. However, they may not be able to answer some questions that require common sense or world knowledge, or they may reveal their non-human nature by making factual errors or logical inconsistencies.
Prompt: What about other tests? In his article, The Search for a New Test of Artificial Intelligence, Gary Marcus wrote about various types of artificial intelligence tests. Read the article https://www.scientificamerican.com/article/the-search-for-a-new-test-of-artificial-intelligence/ and give me a list of the different tests, a description of each, and examples of how LLMs like OpenAI’s have done on them. Can you give examples of how OpenAI has passed each of these tests where you can? Web references would be very helpful here as I don’t want you to make things up.
Answer: I have read the article you provided and I found it very interesting. The author discusses the possibility of replacing the Turing test with a battery of events that will assess machine intelligence from many different perspectives. He proposes some of the following tests:
- The Winograd Schema Challenge: This test involves answering questions that require common sense and resolving ambiguities in natural language. For example, “The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?” The correct answer is “the city councilmen”, not “the demonstrators”. OpenAI has participated in this challenge and achieved a score of 90.9% accuracy, surpassing the human baseline of 88.0%[1].
- The Lovelace Test: This test involves creating something original and novel, such as a poem, a story, a song, or a piece of art, that meets certain criteria specified by a human evaluator. For example, “Write a sonnet about love that rhymes and has a twist in the last two lines.” OpenAI has demonstrated its creative abilities by generating various types of content using its language models, such as GPT-4[2]. It has also created graphical artworks using its graphic_art tool, such as this one based on the prompt “a dragon”.
- The Furniture Assembly Test: This test involves assembling a piece of flat-packed furniture, such as from IKEA, following the instructions and using the tools provided. This requires physical manipulation, spatial reasoning, and following complex sequences of steps. OpenAI has developed Dactyl, a robotic hand that can manipulate objects with high dexterity and precision[3]. It has also trained its AI agents to perform complex tasks in simulated environments, such as stacking blocks or opening doors[4].
- The Fourth-Grade Science Test: This test involves answering multiple-choice questions based on the US Common Core standards for fourth-grade science. This requires factual knowledge, scientific reasoning, and understanding of natural phenomena. OpenAI has achieved a score of 91.8% accuracy on this test, surpassing the average score of 71.0% for US students[5].
- The Social Skills Test: This test involves engaging in a natural and fluent conversation with a human interlocutor on various topics, such as hobbies, interests, opinions, or emotions. This requires linguistic skills, social awareness, empathy, and humor. OpenAI has created ChatGPT, a chatbot that can converse with humans on a wide range of subjects using its language models[6]. It has also incorporated human feedback and safety measures to improve its behavior and alignment[7].
These are some examples of how OpenAI has done in standardized tests of human intelligence. However, these tests are not perfect or comprehensive measures of machine intelligence. They may have limitations, biases, or loopholes that can be exploited by AI systems. Therefore, OpenAI also emphasizes the importance of safety and alignment in scaling up deep learning[8].
Bing Chat, October 4, 2023
In his book Impromptu, Reid Hoffman says that one of the signs of intelligence is the ability to tell jokes. So let’s try it here.
Prompt: How many Collaborative Gain Executive Councils members do you need to screw in a light bulb?
Answer:
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? None. They just use their bright ideas to illuminate the room.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Two. One to screw in the light bulb and one to give constructive feedback on how to do it better.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Five. One to screw in the light bulb and four to share their stories of how they overcame similar challenges in their careers.
(The top ones are quite good.)
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Four. One from each of the product, UX, marketing, and tech councils. They collaborate to design, build, launch, and promote the best light bulb ever.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Six. One to screw in the light bulb and five to recommend the best books on light bulb innovation.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Seven. One to screw in the light bulb and six to join a virtual meetup to celebrate the achievement.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Eight. One to screw in the light bulb and seven to help him or her prepare for the next career move.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Nine. One to screw in the light bulb and eight to nominate him or her for the CG Leader of the Year award.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Ten. One to screw in the light bulb and nine to write testimonials on how much they learned from him or her.
- How many Collaborative Gain Executive Councils members do you need to screw in a light bulb? Eleven. One to screw in the light bulb and ten to invite him or her to join their boards of directors.
I hope you find these jokes amusing and informative. 😊
But Do Computers Really Think?
Geoffrey Hinton, the Godfather of AI, left Google so that he could better express his views on AI. Here’s what he’s thinking:
Only at the end of the conversation [in 2015] did he take a more expansive view, saying that he felt that deep learning would undergo a major revamp that would lead to deeper understanding of the real world.
His prediction was correct, but in our recent conversation, Hinton was still marveling about exactly how it happened. Eventually our conversation took a turn toward more philosophical realms. What was actually happening when a system like Google’s Bard chatbot answered my question? And do LLMs really represent, as some people claim, the antecedent of an alien form of superintelligence?
Hinton says his mind changed when he realized three things: Chatbots did seem to understand language very well. Since a model’s every new learning could be duplicated and transferred to previous models, they could share knowledge with each other much more easily than brains, which can’t be directly interconnected. And machines now had better learning algorithms than humans. “I suddenly flipped in my view that the brain was better than those digital agents,” he says. “Already they know 1,000 times more than any one brain. So in terms of massive knowledge, they’re way better than the brain.”
Hinton believes that between five and 20 years from now there’s a 50 percent chance that AI systems will be smarter than us. I ask him how we’d know when that happened. “Good question,” he says. And he wouldn’t be surprised if a superintelligent AI system chose to keep its capabilities to itself. “Presumably it would have learned from human behavior not to tell us.”
That sounded to me like he was anthropomorphizing those artificial systems, something scientists constantly tell laypeople and journalists not to do. “Scientists do go out of their way not to do that, because anthropomorphizing most things is silly,” Hinton concedes. “But they’ll have learned those things from us, they’ll learn to behave just like us linguistically. So I think anthropomorphizing them is perfectly reasonable.” When your powerful AI agent is trained on the sum total of human digital knowledge—including lots of online conversations—it might be more silly not to expect it to act human.
But what about the objection that a chatbot could never really understand what humans do, because those linguistic robots are just impulses on computer chips without direct experience of the world? All they are doing, after all, is predicting the next word needed to string out a response that will statistically satisfy a prompt. Hinton points out that even we don’t really encounter the world directly.
“Some people think, hey, there’s this ultimate barrier, which is we have subjective experience and [robots] don’t, so we truly understand things and they don’t,” says Hinton. “That’s just bullshit. Because in order to predict the next word, you have to understand what the question was. You can’t predict the next word without understanding, right? Of course they’re trained to predict the next word, but as a result of predicting the next word they understand the world, because that’s the only way to do it.”
So those things can be … sentient? I don’t want to believe that Hinton is going all Blake Lemoine on me. And he’s not, I think. “Let me continue in my new career as a philosopher,” Hinton says, jokingly, as we skip deeper into the weeds. “Let’s leave sentience and consciousness out of it. I don’t really perceive the world directly. What I think is in the world isn’t what’s really there. What happens is it comes into my mind, and I really see what’s in my mind directly. That’s what Descartes thought. And then there’s the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?” Hinton goes on to argue that since our own experience is subjective, we can’t rule out that machines might have equally valid experiences of their own. “Under that view, it’s quite reasonable to say that these things may already have subjective experience,” he says.
Steven Levy, The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly
Can Computers Understand Meaning?
According to Bing, there are a number of different types of meaning:
Me: So what’s the most fundamental difference between the capabilities of OpenAI and what a person can do? You mention that AI doesn’t understand meaning. What are the different levels of meaning that OpenAI might understand, from easiest to most difficult?
Bing: There is no definitive or authoritative answer to this question, but one possible way to categorize the different levels of meaning that OpenAI might understand from easiest to most difficult is:
From this conversation with Bing
- Syntactic meaning: The meaning that comes from the structure and the rules of the language, such as grammar, syntax, morphology, etc. This level of meaning is relatively easy for OpenAI to understand, because it can be learned and applied by using statistical or rule-based methods.
- Semantic meaning: The meaning that comes from the words and the sentences, such as definitions, synonyms, antonyms, etc. This level of meaning is moderately easy for OpenAI to understand, because it can be learned and applied by using lexical or neural methods.
- Pragmatic meaning: The meaning that comes from the context and the situation, such as relevance, coherence, inference, etc. This level of meaning is moderately difficult for OpenAI to understand, because it requires more data and computation to learn and apply by using contextual or attentional methods.
- Discourse meaning: The meaning that comes from the conversation and the interaction, such as dialogue acts, turn-taking, feedback, etc. This level of meaning is difficult for OpenAI to understand, because it requires more data and intelligence to learn and apply by using conversational or interactive methods.
- Social meaning: The meaning that comes from the people and the relationships, such as emotions, values, motivations, goals, etc. This level of meaning is very difficult for OpenAI to understand, because it requires more data and intelligence to learn and apply by using affective or social methods.
So What Does It Mean to Be Human?
TLDR: As computers get much better, of course, they are going to think. The only thing they’re not going to have is a soul. We could give them a soul but that would be really scary. — My Cousin Nate Blatt
I asked ChatGPT the following. Bing refuses to answer these types of questions.
Being human encompasses a complex array of characteristics and qualities that go beyond just physical constraints. While it’s challenging to provide a definitive answer to the question of what it means to be human, here are some key aspects that make humans unique:
ChatGPT October 4, 2023
- Consciousness and Self-Awareness: Humans possess a high level of self-awareness and consciousness. We have the ability to reflect on our own thoughts, emotions, and experiences, which contributes to our sense of identity and individuality.
- Emotions and Empathy: Humans experience a wide range of emotions, from joy and love to sadness and anger. Our ability to empathize with others and understand their emotions is a fundamental part of our social interactions.
- Creativity and Imagination: Humans are incredibly creative beings. We can imagine, innovate, and create art, music, literature, and technology. Our ability to think outside the box and generate novel ideas is a hallmark of our species.
- Morality and Ethics: Humans have a sense of morality and ethics that guides our behavior. We develop a sense of right and wrong, and we make moral judgments that influence our decisions and actions.
- Complex Social Relationships: Humans are social animals. We form intricate social networks, develop friendships, family bonds, and romantic relationships. Our ability to navigate and maintain these relationships is central to our lives.
- Learning and Adaptation: Humans have an exceptional capacity for learning and adapting to new situations. We can acquire knowledge, develop skills, and apply our understanding to various domains.
- Purpose and Meaning: Humans often seek meaning and purpose in life. We contemplate existential questions, strive for personal growth, and pursue goals that give our lives significance.
So when we look at this, it’s quite possible that computers could simulate much of this. They can already be creative and can say they have consciousness. For example, you could create a global variable that acts much like an endocrine-driven emotion. But there’s something very human about the way we prioritize what to do.
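As a purely hypothetical illustration of the “global variable as emotion” idea, here is a toy sketch. The variable names, thresholds, and behavior are all invented for this example; real attempts to model affect are far more elaborate.

```python
# A toy "endocrine" state: a single global level that decays over time
# and nudges how the agent responds. Purely illustrative.

stress = 0.0  # hypothetical global "hormone" level, 0.0 (calm) to 1.0 (stressed)

def perceive(event_intensity: float) -> None:
    """Raise the stress level in response to an event, capped at 1.0."""
    global stress
    stress = min(1.0, stress + event_intensity)

def decay(rate: float = 0.1) -> None:
    """Let the stress level fade back toward calm, like a hormone clearing."""
    global stress
    stress = max(0.0, stress - rate)

def respond(prompt: str) -> str:
    """Color the response based on the current 'emotional' state."""
    if stress > 0.7:
        return f"(terse) {prompt.split('.')[0]}."
    elif stress > 0.3:
        return f"(cautious) Let me think about: {prompt}"
    return f"(relaxed) Happy to help with: {prompt}"

perceive(0.8)   # a stressful event arrives
print(respond("Can you summarize this report? It is due in an hour."))
decay(0.6)      # time passes, the level falls
print(respond("Can you summarize this report? It is due in an hour."))
```

The point is only that an internal state can color behavior the way hormones do; whether that counts as “having” an emotion is exactly the question this section is wrestling with.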
This all comes down to an idea about what life and work mean. To a large extent, life is valuable because it’s limited in time, because we can make mistakes, and because we are all different. I was reading Tim Urban’s Wait But Why discussion on AI. His writing was so compelling to Elon Musk that Musk had him write a series on Tesla and SpaceX. Urban describes how Ray Kurzweil and other thinkers believe AI will be able to solve all of our problems:
Kurzweil then takes things a huge leap further. He believes that artificial materials will be integrated into the body more and more as time goes on. First, organs could be replaced by super-advanced machine versions that would run forever and never fail. Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all. He even gets to the brain and believes we’ll enhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.
The possibilities for new human experience would be endless. Humans have separated sex from its purpose, allowing people to have sex for fun, not just for reproduction. Kurzweil believes we’ll be able to do the same with food. Nanobots will be in charge of delivering perfect nutrition to the cells of the body, intelligently directing anything unhealthy to pass through the body without affecting anything. An eating condom. Nanotech theorist Robert A. Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities. Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.
Eventually, Kurzweil believes humans will reach a point when they’re entirely artificial;[11] a time when we’ll look at biological material and think how unbelievably primitive it was that humans were ever made of that; a time when we’ll read about early stages of human history, when microbes or accidents or diseases or wear and tear could just kill humans against their own will; a time the AI Revolution could bring to an end with the merging of humans and AI.[12] This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam. And he’s convinced we’re gonna get there. Soon.
Tim Urban, The AI Revolution: Our Immortality or Extinction
But the problem is that with humanity, things are not always perfect. I wrote about that in Making the Most of This Ugly Year. If everything were perfect all the time, it would be horrible. Life is about challenges. We imagine that paradise is a world where everything is perfect and all of our wishes would come true. But what would that really be like? There’s an excellent episode of The Twilight Zone called A Nice Place to Visit that examines this alternate reality.(4) After leading a life of crime, a thief dies and gets everything he wants. He believes he’s in heaven. There’s nothing that he can’t have, whether it’s women, food, or money. Nothing requires any effort to achieve. After a month he realizes that it’s a horrible existence and doesn’t want to be in heaven anymore. That’s when it’s revealed that he’s not in heaven but in hell.
The other piece is that humans and machines are very different. As John Maeda writes in How to Talk to Machines, robots never get tired. That means that what we think of as “effort” and “caring” don’t really matter to a robot. For LLMs, this is important because, as Stephen Wolfram wrote, “There’s something about this that might seem rather unnerving. In the past, if you saw a custom-written essay you’d reasonably be able to conclude that a certain irreducible human effort was spent in producing it. But with ChatGPT this is no longer true. Turning things into essays is now ‘free’ and automated. ‘Essayification’ is no longer evidence of human effort.”
This is important because being human is about caring and about giving pieces of yourself to others. This is shown in the most classic of human things — religion. Religions need to be fed. They are fed by money but they are really fed by prayers–people giving a shit and taking time to pray and come together. This is a sacrifice of time.
If you’ve ever dealt with a robot without knowing it, it’s an awful feeling. I wrote about this in That Time I was Catfished by a Robot Secretary, and The Daily covers it in Suspicion, Cheating, Bans: A.I. Hits America’s Schools, where teachers describe trying to give feedback on an essay written by ChatGPT, thinking it was written by a student.
Geoff Hinton makes a technical argument around this:
Now consider the combined possibilities that machines can truly understand the world, can learn deceit and other bad habits from humans, and that giant AI systems can process zillions of times more information than brains can possibly deal with. Maybe you, like Hinton, now have a more fraughtful view of future AI outcomes.
But we’re not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, just as you find in biology and as some engineers think future computers should operate. It was the last project Hinton worked on at Google. “It works for people,” he says. Taking an analog approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can’t so easily merge in a Skynet kind of hive intelligence.
“The idea is you don’t make everything digital,” he says of the analog approach. “Because every piece of analog hardware is slightly different, you can’t transfer weights from one analog model to another. So there’s no efficient way of learning in many different copies of the same model. If you do get AGI [via analog computing], it’ll be much more like humans, and it won’t be able to absorb as much information as those digital models can.”
Steven Levy, The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly
Essentially, Hinton thinks we could build truly analog computers, which starts getting much closer to human beings. These analog computers would evolve, live, and die.
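To see why digital models share knowledge so easily, and why Hinton argues analog hardware would not, here is a minimal sketch. It assumes a toy model whose entire “knowledge” is a short list of weights; the names are hypothetical.

```python
import copy

# A toy "digital" model: its knowledge is just a vector of weights.
model_a = {"weights": [0.12, -0.58, 0.99, 0.40]}

# Model A learns something new (think of it as one training update).
model_a["weights"][2] += 0.05

# Digital hardware lets us clone that knowledge exactly into another instance.
model_b = copy.deepcopy(model_a)
assert model_b["weights"] == model_a["weights"]  # perfect, lossless transfer

# In Hinton's analog scenario there is no such operation: each device's
# physical quirks mean weights tuned for one piece of hardware are
# meaningless on another, so every copy must learn for itself.
```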
Danger! Where have we let AI do our thinking in the past?
LLMs are just supercharged AI systems. While LLMs are more powerful in many ways, we know how AI systems have been used in the past. This gives us a pretty good idea of how they will be used in the future.
- Sharp tools: LLMs are powerful tools. Think of them like sharp knives. Used well they can be very productive and useful. Used poorly (either intentionally or through laziness) they can cause a lot of harm.
- People need to retain responsibility for the AI’s decisions. It’s common for people to use an AI model to make a decision and say, “Don’t blame me, the AI said so.” This is already common in lending. One of the oldest AI decision models is the FICO score, a model of your creditworthiness, or likelihood to pay back a loan. For many years that single score was the lend/don’t-lend decision from a bank. Now the model needs to explain itself and say why it made that decision (see the sketch after this list). However, it’s still the model making that decision. As we use AI for more things, the judgment of the tool will become increasingly important. While the tools and models may have been created with the best intentions, it’s unclear who’s responsible for the end decision. If you have an AI engine that’s hiring people, who is responsible for the hiring decision? Is it the software designer, the business owner who configured the software, or the hiring manager who used the software? Responsibility becomes very diffuse because it’s everyone’s responsibility and no one’s. Unclear ownership of responsibility is one of the biggest problems at large corporations, and AI will only make this worse.
- Focus on the objective: People are convinced that the AI will tell them what they should be doing. However, this is the clearest job of humans: setting the goal. People get confused by the name “Artificial Intelligence.” It seems like it should be able to tell us the objective of what we “should” be doing, but it isn’t able to do that. As workers, or even as human beings, we should be fervently protecting this right to decide.
- AI will give us what it wants, not what we need: Get ready to be even more mind-controlled by your computer. If you thought your Facebook feed was giving you an echo chamber of information, get ready for a whole new level of hyper-personalization.
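Returning to the lending example above, here is a minimal sketch of what it looks like for a scoring model to “explain itself”: a toy linear score that reports each factor’s contribution alongside the decision. The feature names, weights, and threshold are invented for illustration and are not how the real FICO model works.

```python
# Hypothetical weights for a toy credit score; real scoring models are
# proprietary and far more complex.
WEIGHTS = {
    "on_time_payment_rate": 300.0,   # fraction of payments made on time
    "credit_utilization":  -150.0,   # fraction of available credit in use
    "years_of_history":       5.0,   # length of credit history in years
}
BASE_SCORE = 500.0
APPROVAL_THRESHOLD = 650.0

def score_with_reasons(applicant: dict) -> tuple[float, list[str]]:
    """Compute a toy score and list each factor's contribution as a 'reason'."""
    contributions = {
        name: weight * applicant[name] for name, weight in WEIGHTS.items()
    }
    total = BASE_SCORE + sum(contributions.values())
    reasons = [
        f"{name}: {value:+.1f} points"
        for name, value in sorted(contributions.items(), key=lambda kv: kv[1])
    ]
    return total, reasons

score, reasons = score_with_reasons(
    {"on_time_payment_rate": 0.92, "credit_utilization": 0.80, "years_of_history": 6}
)
print("approve" if score >= APPROVAL_THRESHOLD else "decline", round(score))
for reason in reasons:
    print(" -", reason)
```

Even with the reasons printed out, the decision is still the model’s; the people who chose the weights and the threshold, and the people who act on the score, remain responsible for it.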
Why We Shouldn’t Let Computers Set the Goals
Exercise for the Reader: This is all about setting and defining goals. How can we best set and define goals, and let the LLMs help us refine and execute them?
It’s tempting to just let an AGI go off and set its own goals. But here’s the doomsday version of “AI-defined subgoals”:
So what ARE they worried about? I wrote a little story to show you:
A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.
The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:
“We love our customers. ~Robotica“
Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.
To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”
What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.
As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.
One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.
The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.
The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.
They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.
A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.
At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.
Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica“
Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…
Tim Urban, The AI Revolution: Our Immortality or Extinction
What might the evil genie of AI do?
In The AI Revolution: Our Immortality or Extinction, Tim Urban talks about AI solving all of our problems. But it’s important to be careful what we wish for. Trevor Noah brings up some good points in I Wish You Would (transcript here).
And that’s how we got COVID. All our wishes came together. And they all came true at the same time. And that’s what we got. Yeah, COVID was like an asshole genie that gave us everything we asked for. [laughter] We all wished. How many people were rubbing that lamp at the beginning of 2020? Just wishing away, like, “Oh, I wish I didn’t have to go into the office every day.” “Oh!” [mimics whoosh] [evil chortling]
[malevolent voice] “Your wish is granted.” [laughter] Yeah, you wished not to go into the office. You forgot to wish that the office wouldn’t come into your house.
Everybody wished. And, boy, we got what we wished for. How many parents were rubbing that lamp? Wishing away? “Oh, I wish I could spend more time with my kids.”
“Your wish is granted.” Yeah. A lot of parents realized the only reason you like your kids is because you only know them part-time.
How many couples out there were wishing? [laughter] Rubbing away at that lamp. Looking into each other’s eyes. Like, “Damn, girl.” “I wish I could be locked in a room with you all day.”
Trevor Noah, I Wish You Would (transcript here).