
Think AI was impressive last year? Wait until you see what’s coming.

Artificial intelligence experts foresee another year of breakthroughs. Is the world ready?

The OpenAI logo and the ChatGPT website displayed on a phone in Brussels, Belgium, on December 12, 2022. Jonathan Raa/NurPhoto via Getty Images

A few years ago, I’d sometimes find myself needing to answer the question, “Why does Future Perfect, which is supposed to be focused on the world’s most crucial problems, write so much about AI?”

After 2022, though, I don’t often have to answer that one anymore. This was the year AI went from a niche subject to a mainstream one.

In 2022, powerful image generators like Stable Diffusion made it clear that the design and art industry was at risk of mass automation, leading artists to demand answers — which meant that the details of how modern machine learning systems learn and are trained became mainstream questions.

Meta released both BlenderBot (which was a flop) and Cicero, its world-conquering, duplicitous Diplomacy-playing agent (which wasn’t).

OpenAI ended the year with a bang with the release of ChatGPT, the first AI language model to get widespread uptake with millions of users — and one that could herald the end of the college essay, among other potential implications.

And more is coming — a lot more. On December 31, OpenAI president and co-founder Greg Brockman tweeted the following: “Prediction: 2023 will make 2022 look like a sleepy year for AI advancement & adoption.”

AI moves from hype to reality

One of the defining features of AI progress over the past few years is that it has happened very, very fast. Machine learning researchers often rely on benchmarks to compare models to one another and define the state of the art on a specific task. But in AI today, a benchmark is often barely established before a new model is released that saturates it.

When GPT-2 came out, a lot of work went into characterizing its limitations, most of which were gone by GPT-3. Similar work happened for GPT-3, and ChatGPT has for the most part already outgrown those constraints. ChatGPT, of course, has its own limitations, many of them a product of the reinforcement learning from human feedback (RLHF) used to fine-tune it to say less objectionable stuff.

But I’d warn people against inferring too much from those limitations; GPT-4 is reportedly going to be released sometime this winter or spring, and by all accounts is even better.

Some artists have taken comfort in the respects in which current art models are very limited, but others have warned (correctly, I think) that the next generation of models won’t be similarly limited.

And while art and text were the big leaps forward in 2022, there are many other areas where machine learning techniques could be on the brink of industry-transforming breakthroughs: music composition, video animation, writing code, translation.

It’s hard to guess which dominoes will fall first, but by the end of this year, I don’t think artists will be alone in grappling with their industry’s sudden automation.

What to look for in 2023

I think it’s healthy for pundits to make some concrete predictions rather than vague ones; that way, you, the reader, can hold us accountable for our accuracy. So here are some specifics.

In 2023, I think we’ll have image models that can depict multiple characters or objects and consistently model more complicated interactions between them (a weakness of current systems). I doubt they’ll be perfect, but I suspect most complaints about the limits of current systems will no longer apply.

I think we’ll have text generators that give better answers than ChatGPT (as judged by human raters) to nearly every question you ask them. That may already be happening: this week, The Information reported that Microsoft, which has invested $1 billion in OpenAI, is planning to integrate ChatGPT into its beleaguered Bing search engine. Instead of providing links in response to search queries, a language model-powered search engine could simply answer questions.

I think we’ll see much more widespread adoption of coding assistant tools like Copilot, to the point where more than 1 in 10 software engineers will say they use them regularly. (I wouldn’t be surprised if half of software engineers employ such tools habitually, but that would depend on how much the systems end up costing.)

I think the space of AI personal assistants and AI “friends” will take off, with at least three such products that beat today’s assistants like Siri or Alexa in head-to-head user experience comparisons.

Greg Brockman knows far more than I do about what OpenAI has under the hood, and he seems to expect even faster progress, so maybe all of the above is actually too conservative! But those are some concrete ways I think you can expect AI to change the world in the year ahead, and those changes are not small.

“Yikes”

Elon Musk replied to Brockman’s tweet about AI’s prospects in 2023 with a single word: “Yikes.”

There’s a lot of history here, but I’ll try to give you the quick rundown: Musk read about the potential and the enormous risks of AI in 2014 and 2015 and became convinced that it was one of the biggest challenges of our time. As he put it at the time:

With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.

Along with other Silicon Valley luminaries like Y Combinator’s Sam Altman, Musk co-founded OpenAI in 2015, ostensibly to make sure that AI development would benefit all of humanity. That’s a complicated mission, to say the least, because how best to make AI go well depends immensely on what exactly you expect to go wrong. Musk said he feared the centralization of power under tech elites; others worry the tech elites will lose control of their own creation.

Though Musk departed OpenAI’s board in 2018, he has kept warning about AI, including the AIs that the company he helped found is building and releasing into the world.

I rarely find common ground with Elon Musk. But that “yikes” is also some of what I felt reading Brockman’s prediction. Warnings from AI experts that “we are creating god” used to be easy to brush off as hype; they aren’t so easy to dismiss anymore.

I take great pride in my prediction track record, but I’d love to be wrong about these. I think a slow, sleepy year on the AI front would be good news for humanity. We’d have some time to adapt to the challenges AI poses, study the models we have, and learn about how they work and how they break.

We’d be able to make progress on the challenge of understanding the goals AI systems have and predicting their behavior. And with the hype cooling, we might have time for a more serious conversation about why AI matters so much and how we — a human civilization with a shared stake in this issue — can make it go well.

That’s what I’d love to see. But the easiest way to get predictions wrong is to predict what you want to see rather than where incentives and technological developments are actually pointing. And the incentives in AI do not point to a sleepy year.

A version of this story was initially published in the Future Perfect newsletter.
