The past year was the first in which general-purpose models became economically useful. GPT-3, in particular, demonstrated that large language models have surprising linguistic competence and the ability to perform a wide variety of useful tasks. I expect our models to continue to become more competent, so much so that the best models of 2021 will make the best models of 2020 look dull and simple-minded by comparison. This, in turn, will unlock applications that are difficult to imagine today.
In 2021, language models will start to become aware of the visual world. Text alone can express a great deal of information about the world, but it is incomplete, because we also live in a visual world. The next generation of models will be capable of editing and generating images in response to text input, and hopefully they’ll understand text better because of the many images they’ve seen.
This ability to process text and images together should make models smarter. Humans are exposed to not only what they read but also what they see and hear. If you can expose models to data similar to those absorbed by humans, they should learn concepts in a way that’s closer to how humans do. This is an aspiration — it has yet to be proven — but I’m hopeful that we’ll see something like it in 2021.
And as we make these models smarter, we’ll also make them safer. GPT-3 is broadly competent, but it’s not as reliable as we’d like it to be. We want to give the model a task and get back output that doesn’t need to be checked or edited. At OpenAI, we’ve developed a new method called reinforcement learning from human feedback. It allows human judges to use reinforcement to guide a model’s behavior in ways we want, so we can amplify desirable behaviors and inhibit undesirable ones.
GPT-3 and systems like it passively absorb information. They take the data at face value and internalize its correlations, which is a problem any time the training dataset contains examples of behaviors that we don’t want our models to imitate. When using reinforcement learning from human feedback, we compel the language model to exhibit a great variety of behaviors, and human judges provide feedback on whether a given behavior was desirable or undesirable. We’ve found that language models can learn from such feedback, allowing us to shape their behaviors quickly and precisely using a relatively modest number of human interactions.
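To make the idea concrete, here is a minimal sketch (not OpenAI’s implementation) of the preference-learning step at the heart of this method: a small reward model is trained on pairwise human judgments of which of two behaviors was preferred, and can then score new candidate behaviors; in full reinforcement learning from human feedback, that score becomes the training signal for the language model itself. The feature vectors and network sizes below are illustrative placeholders.

    # Minimal sketch of reward-model training from pairwise human preferences.
    # Random feature vectors stand in for embeddings of candidate model outputs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Toy "human feedback": in each pair, the judge preferred the first behavior.
    preferred = torch.randn(64, 8) + 1.0   # behaviors judged desirable
    rejected  = torch.randn(64, 8) - 1.0   # behaviors judged undesirable

    reward_model = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

    for step in range(200):
        # Bradley-Terry-style objective: the preferred behavior should score higher.
        loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # The learned reward can now rank new candidate behaviors; in full RLHF it
    # would guide fine-tuning of the language model itself.
    candidates = torch.randn(4, 8)
    print(reward_model(candidates).squeeze(-1))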
By exposing language models to both text and images, and by training them through interactions with a broad set of human judges, we see a path to models that are more powerful but also more trustworthy, and therefore more useful to a greater number of people. That path offers exciting prospects in the coming year.
Ilya Sutskever is a co-founder of OpenAI, where he is chief scientist.