AI Large Language Models (LLMs) Discussion for Council 9

There’s a change happening in the world due to LLMs like OpenAI’s ChatGPT. The world is flooded with hype but offers little practical guidance on steps we can take to get smarter about the technology.

Questions for Discussion: What can we do to prepare for this change? What are you doing or have seen people do with LLMs today? What will the actual big changes be vs. the hype?

Why You Should Experiment with LLMs

Early adopters of this language will influence the unimaginable machines of the future, and, soon enough, this vocabulary will not be optional. The question is: Will you opt in now, or later? — John Maeda, Talking in Code

Sam Altman, the CEO of OpenAI, says the future is about to change: “We could have gone off and just built this in our building here for five more years and we would have had something jaw-dropping.” Instead, he’s decided to release iterative versions of the software to let people get ready for it. So let’s take this opportunity!

This is an exciting time. It’s like having the first iPhone from Steve Jobs, looking at it and saying, “I’ve got this thing in my hand. Think about what it will be like when everyone has one!” I’d like to have a conversation about what it is to play with this new technology and see how deeply we can look into the future.

What are AI and LLMs?

At its base, we built computers to get stuff done. This is the human goal: to create new tools to get more of what we want done. This started with the first tools and knives, proceeded through the machines of the Industrial Revolution, and continued through the computer revolution.

Now we are at a point where we’re creating AI tools that are vastly more powerful than ever before. A lot of this has to do with the way we train them. Older computer systems required experts to hand-code knowledge into them, with the goal of making the computer as good as the experts. It’s like teaching a high school student a foreign language with a grammar book and vocabulary lists: it’s a lot of work, and the student rarely reaches the level of a native speaker.

More recently, the focus shifted to machine learning as the way to create artificially intelligent systems. These systems are able to take data and “learn.” Most people are surprised to hear that Google’s spell check and translation engines are not powered by human linguists but by machines that have “learned” how to do these things by analyzing statistical data. Essentially, the computer, through statistical analysis, can look at the input data and convert it into the “right” answer.
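The spell-check example can be sketched in miniature: instead of expert rules, a program counts words in raw text and picks the statistically most likely correction. This is a toy, heavily simplified version of the approach Peter Norvig famously described; the tiny corpus here is purely illustrative.

```python
from collections import Counter

# "Learn" from raw text: no grammar book, just word frequencies.
corpus = "the cat sat on the mat the cat ran".split()
counts = Counter(corpus)

def edits1(word):
    # All strings one edit (delete, swap, replace, insert) away from `word`.
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    # Known words stand; otherwise pick the most frequent nearby word.
    if word in counts:
        return word
    candidates = [w for w in edits1(word) if w in counts] or [word]
    return max(candidates, key=counts.get)

print(correct("teh"))  # → "the"
```

The point is the same one made above: the “right” answer falls out of statistics over data, not hand-written expert rules.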

What Makes LLMs Different?

These systems are very good at completing one single task, whether it’s checking spelling, translation, speech recognition, or recommending movies on Netflix. The goal of these systems is to automate “boring” work. But now these tools are being used to do things that we thought were very human. Computers can now make decisions on hiring, write essays, and even tell jokes.

How did this happen? Computer scientists have long asked, “What if we could model the brain and human thinking?” They did it by building a basic model of the brain called a neural network, with “neurons” and “connections” that resemble the brain at a high level. These models were successful at many AI tasks, but they didn’t do anything resembling “thinking.” Then the team at OpenAI tried something new: what if we used these neural networks to predict the next word, training them on text from across the internet? By predicting one next word at a time, the model could gradually write sentences, paragraphs, and essays.
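The “predict the next word” idea can be illustrated with a toy model that simply counts which word follows which in some training text. Real LLMs use deep neural networks over vastly more data, but the training objective is the same in spirit; the sample text here is just an illustration.

```python
from collections import Counter, defaultdict

# Tiny "training set": count which word follows which.
text = "the cat sat on the mat and the cat ran to the door"
words = text.split()

followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    # Always pick the most common follower seen in training.
    return followers[word].most_common(1)[0][0]

def generate(start, n=4):
    # Generate text one predicted word at a time.
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("the"))  # → "cat" ("the" is followed by "cat" most often)
print(generate("the"))
```

A model like this writes plausible-looking fragments from its training text; scale the same objective up to a huge neural network and the internet, and you get something like GPT.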

What was shocking was that in doing this, the model didn’t just predict the next word; it seemed to have figured out how language works. As Stephen Wolfram put it in his summary of the technology: “I think we have to view this as a potentially surprising scientific discovery: that somehow in a neural net like ChatGPT, it’s possible to capture the essence of what human brains manage to do in generating language.”

The Language Interface

Probably the best way to think about LLMs is as a new interface. This sounds a bit mundane, but interfaces are how we deal with the world. Let’s take a simple example from fighting wars:

  • In the earliest caveman days, the winner was the strongest single warrior.
  • Then we moved to generals who could marshal troops. So it was no longer the strongest individual, but the one who could lead the strongest army.
  • Now, much of warfare is about technology. The army that can leverage technology in the most productive way has the advantage.

It’s a matter of “He who masters the interfaces and communication tools wins.” Once we got past the single warfighter, everything became about the interface and how we communicate with other people and machines.

Human language is the most powerful interface that we have. It’s all about communication and how we get things done. The current structure is:

  • Executive sets strategy and gives it to Product Manager using language structured in PowerPoint
  • Product Manager creates stories and gives them to developers using language in Jira
  • Developer creates computer code and gives it to computers using the language of code
  • Computers execute

All of these handoffs are really just interfaces between the steps. The better these interfaces reduce friction, the more effectively we can reach our objective. LLMs provide a new language interface that allows people to “speak” directly to computers using natural language.

Lessons from the Past

LLMs are just supercharged AI systems. While LLMs are more powerful in many ways, we know how AI systems have been used in the past. This gives us a pretty good idea of how they will be used in the future.

  • Sharp tools: LLMs are powerful tools. Think of them like sharp knives. Used well they can be very productive and useful. Used poorly (either intentionally or through laziness) they can cause a lot of harm.
  • People need to retain responsibility for the AI’s decisions. It’s common for people to use an AI model to make a decision and then say, “Don’t blame me, the AI said so.” This is already common in lending. One of the oldest AI decision models is the FICO score, a model of creditworthiness, or your likelihood to pay back a loan. For many years, a bank’s lend/don’t-lend decision came down to that single score. Now the model must explain itself and say why it made that decision; still, it’s the model making the decision. As we use AI for more things, the judgment embedded in these tools will become increasingly important. While a tool may have been created with the best intentions, it’s unclear who’s responsible for the end decision. If an AI engine is hiring people, who is responsible for the hiring decision? The software designer, the business owner who configured the software, or the hiring manager who used it? Responsibility becomes diffuse because it’s everyone’s responsibility and no one’s. Unclear ownership of responsibility is already one of the biggest problems at large corporations, and AI will only make it worse.
  • Focus on the objective: People expect the AI to tell them what they should be doing. But setting the goal is the clearest job left to humans. People get confused by the name “Artificial Intelligence”: it sounds like it should be able to tell us what we “should” be doing, but it can’t. As workers, and even as human beings, we should fervently protect this right to decide.
  • It looks good, but is it good? We used to use language quality as a proxy for content quality: you could spot the phishing email from the long-lost prince by its bad grammar. Today, LLMs will produce good-looking content on anything. This will be even more of a problem at work when we start receiving long, well-written missives and essays generated by LLMs. Of course, the only way to get through them will be summarization by LLMs.
  • AI will give us what it wants, not what we need: Get ready to be even more mind-controlled by your computer. If you thought your Facebook feed was giving you an echo chamber of information, get ready for a whole new level of hyper-personalization.

How to Think About the Future

  • This is a new interface: Using the model of Google Search, LLMs will change the way that people use computers. With Google Search you put in what you want and Google figures out how to get your answer. With LLMs, we move to an assistant model. Everyone will have an assistant who can help them with their work. Instead of tasking the computer with what you want, LLMs are partners in helping you refine and get to your goal. It’ll be your partner in writing emails, creating presentations, and scouring the web for information.
  • It will change the way we communicate with each other: Think about the way mobile technology has changed the way we work and communicate. At work, we no longer need to be in the office to be online. Personally, we can use text messages to keep in touch and coordinate without setting up a meeting. LLMs do a great job of creating and summarizing data. So we will be typing a few words to get our point across and then using LLMs to generate an email (or a number of emails for us to choose from). Then, we will receive the email and the LLMs will summarize it and tell us what we need to know.
  • Be ready to become a prompt engineer. The new way of programming these systems will be through “prompt engineering.” Writing good prompts will be the newly hyped skill of the next few years, like the HTML writers of the early web. Note that it may not turn out that way. It may be so easy to use, like Google, that being a power user isn’t that important.
  • What do you want to do? LLMs will be able to do a surprising amount of your thinking and tasks for you. This includes not only the boring tasks but also the more exciting and creative ones. It will be tempting to let the LLM do more and more work. However, what will you spend your newfound time doing? Will it be something productive and meaningful or watching more Netflix?
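The “prompt engineering” idea above can be pictured as templating: wrapping a bare request with a role, constraints, and examples so the model produces more useful output. The template structure below is purely illustrative, not a standard.

```python
def build_prompt(task, audience, examples):
    # Assemble a structured prompt: role, task, constraints, and examples
    # of the desired tone. (Field names here are illustrative.)
    lines = [
        f"You are an assistant writing for {audience}.",
        f"Task: {task}",
        "Constraints: be concise; use plain language.",
        "Examples of the desired tone:",
    ]
    lines += [f"- {ex}" for ex in examples]
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize this quarter's sales email thread",
    audience="a busy executive",
    examples=["Revenue up 4% on last quarter."],
)
print(prompt)
```

Whether this kind of craft stays valuable, or the tools become easy enough that it doesn’t matter (as the bullet above suggests), remains an open question.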

How I’ve Been Experimenting

I’ve been using Bing Chat, which uses GPT-4 as the back end and is basically ChatGPT with the ability to search the web. Here are some examples of the chats I’ve had with the tool to better understand it.

There’s a lot more I want to do. Here are some ideas:

  • Getting AI to write in my style.
  • Better understand how PlugIns work with LLMs.
  • Understand how I can build simple MVPs with ChatGPT+.
  • Learn about Semantic Kernel (e.g., with this course)

The Big Question for Discussion

How can we better prepare? What have you tried? What have you seen that works? Where’s the hype vs. the reality?

References