As OpenAI developed GPT-4, the engine behind ChatGPT Plus, both the team and the wider AI community encountered something unexpected. GPT-4 went beyond its impressive language abilities and started showing behaviors eerily reminiscent of human thought patterns. The shift was perplexing; after all, OpenAI had essentially just fed the system more data. Why would more information alone lead to such a profound change? The question defied the expectations of a mere data upgrade, suggesting a deeper, more complex relationship between data quantity and AI behavior.
This revelation shifted scientists’ perspectives on the nature of these models. GPT-4, initially crafted to predict text sequences, began to show decision-making that parallels human cognition. It wasn’t just the human-like output that caught attention; it was the way GPT-4 arrived at its conclusions: exhibiting biases, using heuristics, and shifting thought processes in a manner strikingly similar to how the human brain operates.
This surprising resemblance between GPT-4’s operations and human cognition raises intriguing questions about the nature of intelligence, both artificial and organic. It challenges long-held beliefs about the capabilities of AI and opens up new possibilities for the future of human-AI interaction. In this exploration, we delve into the specific aspects of GPT-4 that mirror human cognition, examine the implications of these similarities, and contemplate what this means for the future of AI as we advance towards even more sophisticated models like GPT-5.
When ChatGPT was released, Stephen Wolfram, a prominent figure in computational science, made a striking observation that reshaped our understanding of language in the context of artificial intelligence: the real surprise of ChatGPT’s emergence was that language, long believed to be deeply complex, may not be as hard for AI to process as we had thought. The insight marks a significant shift in how we think about language comprehension and its replication in machine learning models like ChatGPT.
I remember reading Wolfram’s line about language (and thought) and thinking, “Really? Does he really propose that robots can think?” But it was about to get a whole lot weirder with GPT-4, the engine behind ChatGPT Plus.
One of the most intriguing aspects of GPT-4’s emergent behaviors is its apparent ability to mimic cognitive biases inherent in the human brain. Biases in decision-making and perception, long thought to be exclusively human traits, are now being observed in the responses generated by GPT-4. This blurs the line between artificial and human intelligence, suggesting that AI can exhibit patterns of thought and behavior once believed to be uniquely human.
Overconfidence bias, a well-known phenomenon in human psychology, is the tendency to place excessive confidence in one’s own answers or abilities, often beyond what is objectively justified. Surprisingly, this bias is not confined to humans; it has been observed in GPT-4 as well.
This shouldn’t have happened. We assumed that AI, driven by data and algorithms, would be objective and free from the cognitive biases that affect humans. Yet research has shown that GPT-4, despite being a machine learning model, can exhibit overconfidence in its responses. This is particularly evident when the model gives highly confident answers even though its predictions are incorrect, or when it ventures into areas outside its training data.
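One informal way to see this for yourself is to ask the model for an answer together with a self-reported confidence score, then check whether high confidence actually tracks correctness. Below is a minimal sketch, assuming the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the questions, prompt format, and parsing are my own illustrative choices, not from any published study:

```python
# Hypothetical sketch: probe whether stated confidence tracks accuracy.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Toy factual questions with known answers (illustrative only).
questions = [
    ("In what year did Apollo 11 land on the Moon?", "1969"),
    ("What is the chemical symbol for gold?", "Au"),
]

for question, expected in questions:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Reply exactly as: ANSWER: <answer> | CONFIDENCE: <0-100>"},
            {"role": "user", "content": question},
        ],
    )
    text = response.choices[0].message.content
    # Crude parsing; a real experiment would validate the format.
    answer = text.split("ANSWER:")[1].split("|")[0].strip()
    confidence = text.split("CONFIDENCE:")[1].strip()
    correct = expected.lower() in answer.lower()
    print(f"{question}\n  answer={answer!r} confidence={confidence} correct={correct}")
```

If the printed confidence stays high even when the answer is wrong, that is the same mismatch between certainty and accuracy that defines overconfidence in humans.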
Even weirder is GPT-4’s ability to change how it thinks. In “Thinking, Fast and Slow,” Daniel Kahneman describes two different ways that humans think, calling them System-1 and System-2: System-1 is fast, instinctive, and emotional, while System-2 is slower, more deliberate, and logical. Remarkably, GPT-4 displays a similar capacity to alternate between these modes of thinking.
GPT-4 demonstrates System-1-like behavior when it rapidly generates responses, leveraging learned patterns from a diverse dataset. This enables the AI to swiftly address direct inquiries and engage in light conversation. In contrast, when faced with more intricate questions or tasks, GPT-4 seems to transition into a System-2 mode, adopting a more contemplative approach that reflects in its methodical and thorough responses.
This cognitive versatility can even be prompted deliberately. By using cues such as “Do this step by step” or “Take a deep breath,” users can guide GPT-4 into a more analytic state, which proves beneficial for tackling mathematical challenges or in-depth analyses.
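As a rough illustration (the model name, prompt wording, and test question are my assumptions, not a benchmark from this post), here is a minimal sketch that sends Kahneman’s classic bat-and-ball question twice: once plainly, and once with a step-by-step cue:

```python
# Hypothetical sketch: nudge the model from a fast, System-1-style answer
# toward deliberate, System-2-style reasoning with a single prompt cue.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

fast = ask(QUESTION)                              # direct ask: System-1-style
slow = ask(QUESTION + "\nDo this step by step.")  # cue for System-2-style reasoning

print("Fast answer:\n", fast)
print("\nStep-by-step answer:\n", slow)
```

The intuitive System-1 trap is $0.10; the deliberate answer is $0.05. Whether the cue changes the output on any given run is an empirical question, but the pattern matches the System-1/System-2 framing above.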
What’s even more astonishing is that this sophisticated behavior emerged as a byproduct of scaling up the AI’s training data; it was never explicitly programmed. The emergent nature of this phenomenon underscores a fascinating development in AI: the spontaneous generation of complex cognitive behaviors from the mere expansion of data, a process that remains a mystery to researchers and developers alike.
Looking ahead to GPT-5 and beyond, we can expect advancements that may include a more nuanced understanding of context, potential displays of emotional intelligence, creative problem-solving, and adaptive learning akin to human experience. These models could better grasp ethical considerations, and although it’s speculative, they might edge closer to a form of self-awareness, continuing to reshape the interface between humanity and artificial intelligence.
The way GPT-4 echoes human thought is as surprising as it is insightful. This wasn’t something the developers explicitly aimed for; it just happened, which in itself is fascinating. Seeing an AI stumble upon the same cognitive shortcuts and quirks that our brains take shows us a clear, if unexpected, reflection of ourselves. It’s like discovering that your reflection in the mirror can wink back at you—both startling and thought-provoking. Through GPT-4, we’re not just building smarter machines; we’re inadvertently setting up experiments that reveal the quirks of our own minds. It’s an unintended bonus from the AI world, giving us a new way to poke at questions about how we think and why we think the way we do.
This post took about an hour to compose. I’m learning that writing these posts is a weird mix of reading and writing: working with ChatGPT, I’m composing the articles I want to read. The next step in shaping my writing partner is to modify its style. I want less flowery prose and no humor, but I still want it to be interesting; I’ll have to figure out the right prompt for that. I also want to work on a proper workflow. I feel like ChatGPT’s roughly 500-word response window gives me artifacts (like too many introductions and conclusions) that I don’t want.