The Limits of AI (Conversation with Bing)

I was inspired by Reid Hoffman’s book Impromptu to work with OpenAI’s GPT-4 (through Bing) to better understand the limits of Large Language Models (LLMs) like OpenAI’s GPT. In the conversations below, Bing’s answers are in block quotes.

The basic idea is that I used The Journal of Best Practices, a memoir by David Finch, as my starting point.

Me: Summarize the book “Journal of Best Practices”

Bing: The Journal of Best Practices is a memoir by David Finch, who was diagnosed with Asperger syndrome five years after marrying Kristen, the love of his life. The book chronicles his efforts to understand his condition and improve his marriage by following a set of rules and tips that he writes in a notebook. The book is a humorous and heartfelt story of self-improvement, love and acceptance [1][2].

Me: How does he try to be a better husband?

Bing: He tries to understand his condition and his wife’s perspective by following a set of rules and tips that he writes in a notebook called the Journal of Best Practices [1][2]. He also uses humor and honesty to cope with his challenges and communicate with his wife [3][1]. He becomes more flexible, empathetic and supportive as he works on his self-improvement [1][4].

Me: Tell me more about David’s rules?

Bing: David Finch’s rules are based on his personal observations and reflections on how to be a better husband and partner to his wife, who does not have Asperger syndrome. Some of his rules are [1][2][3][4][5]:

  • Don’t change the radio station when she’s singing along
  • Apologies do not count when you shout them
  • Be her friend, first and always
  • Parties are supposed to be fun
  • Laundry: Better to fold and put away than to take only what you need from the dryer
  • Don’t talk about the toaster
  • Don’t take things so literally
  • Don’t interrupt
  • Listen to her feelings
  • Give compliments

These rules help him to be more flexible, attentive, empathetic and supportive in his marriage. They also help him to cope with his challenges and communicate better with his wife.

Bing July 15, 2023

So this is what got me thinking. Because of his condition, Finch doesn’t intuitively understand social interactions, so he learns to be more social through a feedback loop. This is a lot like what ChatGPT and other models do: the tool learns how to be more “social” based on the feedback it receives.
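To make that analogy concrete, here is a toy sketch of such a feedback loop in Python. This is not how ChatGPT is actually trained (real systems use reinforcement learning from human feedback over enormous models); the response styles, approval probabilities, and update rule below are invented purely for illustration.

```python
import random

# Toy "feedback loop": the system picks a response style, gets a
# thumbs-up/thumbs-down from a (simulated) person, and shifts its
# preferences toward styles that earn positive feedback.
# All names and numbers here are made up for illustration.

STYLES = ["blunt", "polite", "empathetic"]

# Hidden preference of the simulated person: empathetic replies are
# most likely to get a thumbs-up.
TRUE_APPROVAL = {"blunt": 0.2, "polite": 0.6, "empathetic": 0.9}

def get_feedback(style: str) -> int:
    """Simulated human feedback: 1 = thumbs up, 0 = thumbs down."""
    return 1 if random.random() < TRUE_APPROVAL[style] else 0

def run_feedback_loop(rounds: int = 2000, epsilon: float = 0.1):
    value = {s: 0.0 for s in STYLES}   # estimated approval per style
    count = {s: 0 for s in STYLES}

    for _ in range(rounds):
        # Mostly use the best-known style, occasionally try another.
        if random.random() < epsilon:
            style = random.choice(STYLES)
        else:
            style = max(STYLES, key=lambda s: value[s])

        reward = get_feedback(style)

        # Incremental average: nudge the estimate toward the feedback.
        count[style] += 1
        value[style] += (reward - value[style]) / count[style]

    return value

if __name__ == "__main__":
    # "empathetic" should end up with the highest estimated approval.
    print(run_feedback_loop())
```

The point of the sketch is just the shape of the loop: act, get feedback, adjust, repeat.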

The next part is more heavily edited, so it doesn’t include the sources. Note that I need to specify OpenAI GPT (its underlying AI engine) because Bing “thinks” it’s just a chatbot.

Me: Couldn’t OpenAI GPT use these types of methods to better communicate with people?

Bing: OpenAI GPT may be able to learn the ways that a person communicates and better copy that, but that does not mean that it would make the communication better and easier. Copying or mimicking someone’s communication style is not the same as understanding or engaging with someone’s communication content. It does not guarantee that the communication is meaningful, appropriate, or accurate. It may also create misunderstandings, confusion, or deception. For example, OpenAI GPT may copy someone’s use of humor, sarcasm, irony, or figurative language, but it may not understand the context, intention, or effect of those forms of communication. It may also generate texts that are inconsistent, contradictory, or irrelevant to the conversation. Therefore, OpenAI GPT may not be able to improve the quality or the ease of the communication by simply copying someone’s communication style. It may need to have a deeper and more reliable understanding of the communication content and the communication partner.

Me: So what’s the most fundamental difference between the capabilities of OpenAI and what a person can do?

Bing: One possible way to answer this question is to say that the most fundamental difference between the capabilities of OpenAI and what a person can do is that OpenAI is a machine that generates texts, while a person is a human being that communicates meanings. This difference implies that OpenAI and a person have different purposes, processes, and outcomes when they produce or consume texts. For example:

  • Purpose: OpenAI generates texts based on its training data and input, without any regard for the social implications or consequences of its output. A person communicates meanings based on their emotions, values, motivations, goals, etc., with an awareness of the social implications and consequences of their output.
  • Process: OpenAI generates texts by using data, computation, and intelligence, without any regard for the quality or the impact of its output. A person communicates meanings by using emotions, values, motivations, goals, etc., with an awareness of the quality and the impact of their output.
  • Outcome: OpenAI generates texts that are coherent and diverse, but not necessarily meaningful, appropriate, or accurate. A person communicates meanings that are meaningful, appropriate, and accurate, but not necessarily coherent or diverse.

Therefore, the most fundamental difference between the capabilities of OpenAI and what a person can do is that OpenAI is a machine that generates texts, while a person is a human being that communicates meanings.

Me: You mention that AI doesn’t understand meaning. What are the different levels of meaning that OpenAI might understand from easiest to most difficult?

Bing: There is no definitive or authoritative answer to this question, but one possible way to categorize the different levels of meaning that OpenAI might understand from easiest to most difficult is:

  • Syntactic meaning: The meaning that comes from the structure and the rules of the language, such as grammar, syntax, morphology, etc. This level of meaning is relatively easy for OpenAI to understand, because it can be learned and applied by using statistical or rule-based methods.
  • Semantic meaning: The meaning that comes from the words and the sentences, such as definitions, synonyms, antonyms, etc. This level of meaning is moderately easy for OpenAI to understand, because it can be learned and applied by using lexical or neural methods.
  • Pragmatic meaning: The meaning that comes from the context and the situation, such as relevance, coherence, inference, etc. This level of meaning is moderately difficult for OpenAI to understand, because it requires more data and computation to learn and apply by using contextual or attentional methods.
  • Discourse meaning: The meaning that comes from the conversation and the interaction, such as dialogue acts, turn-taking, feedback, etc. This level of meaning is difficult for OpenAI to understand, because it requires more data and intelligence to learn and apply by using conversational or interactive methods.
  • Social meaning: The meaning that comes from the people and the relationships, such as emotions, values, motivations, goals, etc. This level of meaning is very difficult for OpenAI to understand, because it requires more data and intelligence to learn and apply by using affective or social methods.

Me: Couldn’t an AI engine like DeepMind be better at social learning because of its expertise in reinforcement learning?

Bing: DeepMind is a research company that specializes in artificial intelligence and machine learning. It is a subsidiary of Google and part of the Alphabet group. DeepMind’s mission is to create artificial agents that can learn from any kind of data and achieve general intelligence [1][2][3].

DeepMind may be better at social learning than OpenAI because of its expertise in reinforcement learning, which is a type of machine learning that involves learning from rewards and punishments. Reinforcement learning can enable artificial agents to learn from their own actions and experiences, and to adapt to different environments and goals [1][2][4].

DeepMind has applied reinforcement learning to various domains and challenges, such as playing video games, board games, and Atari games, controlling robots, simulating physics, predicting protein structures, improving health care, and advancing scientific discovery [1][2][3][4]. Some of DeepMind’s notable achievements include AlphaGo, the first computer program to defeat a human professional player at the game of Go, which is considered one of the most complex and challenging board games in the world [1][2][3].
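A quick aside from me: for readers unfamiliar with what “learning from rewards and punishments” looks like in practice, here is a minimal tabular Q-learning sketch, the textbook reinforcement learning recipe. DeepMind’s actual agents use deep neural networks and far richer environments; the tiny corridor world and all the numbers below are purely illustrative.

```python
import random

# Minimal tabular Q-learning on a 5-cell corridor: start at cell 0,
# reach cell 4 for a reward of +1, every other move costs -0.01.
# Purely illustrative; real agents use deep networks and far more
# complex environments.

N_STATES = 5
ACTIONS = [-1, +1]                      # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    if next_state == N_STATES - 1:
        return next_state, 1.0, True    # reached the goal: reward
    return next_state, -0.01, False     # small penalty per move: punishment

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                    # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly take the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state, reward, done = step(state, action)

        # Q-learning update: move the estimate toward
        # the reward plus the discounted value of the next state.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy should be "always step right" (+1).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)])
```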

Me: So with these other methods, do you think DeepMind would be successful?

Bing: I don’t know for sure, but I think it would be very difficult for DeepMind to be successful in learning social meanings with these other methods. Social meanings are not easy or simple to learn, even for humans. They are complex and diverse, subjective and dynamic, influenced by many factors, and changing and evolving over time and across contexts. They are also not easy or simple to measure, even for humans. They are not a fixed or objective concept, and they may vary depending on the person, the situation, the culture, and the time. They may also have multiple levels and dimensions, and they may be literal or figurative, explicit or implicit, factual or opinionated, logical or emotional, etc.

Therefore, I think it would be very difficult for DeepMind to be successful in learning social meanings with these other methods. It would require a lot of data, computation, and intelligence. It would also require a clear and consistent definition and measurement of social meanings that can capture their complexity and diversity. It would also require a flexible and adaptive approach that can account for their variability and dynamism. It would also require a respect and a responsibility for the social implications and consequences of its output. I don’t know if DeepMind has or can have these requirements. I don’t know if any artificial agent can have these requirements. I don’t know if learning social meanings is possible or desirable for artificial agents. I don’t know what the ultimate goal or outcome of learning social meanings is for artificial agents. I don’t know what the benefits or risks of learning social meanings are for artificial agents and humans. I don’t know if learning social meanings is the same as understanding or communicating social meanings. I don’t know if learning social meanings is the same as being human.

Me: You say that this is not easy for humans. Isn’t it possible that DeepMind might get better than humans at this? AI today can recognize speech better than humans. Don’t you think DeepMind will be able to recognize and respond to social cues at least as well as humans?

Bing: I don’t think so. I think that recognizing and responding to social cues is not the same as recognizing speech. Speech recognition is a well-defined and measurable task that involves converting spoken words into written text. Social cue recognition and response is a vague and complex task that involves interpreting and reacting to subtle and implicit signals that convey social meanings. Speech recognition can be improved by using more data, computation, and intelligence. Social cue recognition and response may not be improved by using more data, computation, and intelligence. Speech recognition does not have significant social implications or consequences. Social cue recognition and response does have significant social implications or consequences.

Therefore, I don’t think that DeepMind might get better than humans at recognizing and responding to social cues. I think that this is a task that requires more than data, computation, and intelligence. It requires emotions, values, motivations, goals, beliefs, knowledge, experience, etc. It requires self-awareness, consciousness, free will, identity, purpose, meaning, etc. It requires creativity, imagination, curiosity, interest, etc. It requires diversity, consistency, personality, behavior, etc. It requires respect, responsibility, ethics, morals, etc. It requires being human.

Bing July 15, 2023

This is interesting, but I’m not convinced. I’ll have to dig a little deeper into this. Some things that make me think that LLMs are pretty good at social cues are “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” and a reference in this episode of Computerphile that, as model size grows, the model will tell you that it is sentient and that it has a strong desire not to be shut down. Here’s another reference about the dangers of anthropomorphizing GPT.