Stephen Marche / The Atlantic

Of God and Machines

The future of artificial intelligence is neither utopian nor dystopian—it’s something much more interesting.


Miracles can be perplexing at first, and artificial intelligence is a very new miracle. “We’re creating God,” Mo Gawdat, the former chief business officer of Google X, recently told an interviewer. “We’re summoning the demon,” Elon Musk said a few years ago, in a talk at MIT. In Silicon Valley, good and evil can look much alike, but on the matter of artificial intelligence, the distinction hardly matters. Either way, an encounter with the superhuman is at hand.

Early artificial intelligence was simple: Computers that played checkers or chess, or that could figure out how to shop for groceries. But over the past few years, machine learning—the practice of teaching computers to adapt without explicit instructions—has made staggering advances in the subfield of natural language processing, a new leap arriving every year or so. Even so, the full brunt of the technology has not arrived yet. You might hear about chatbots whose speech is indistinguishable from humans’, or about documentary makers re-creating the voice of Anthony Bourdain, or about robots that can compose op-eds. But you probably don’t use NLP in your everyday life.

Or rather: If you are using NLP in your everyday life, you might not always know. Unlike search or social media, whose arrivals the general public encountered and discussed and had opinions about, artificial intelligence remains esoteric—every bit as important and transformative as the other great tech disruptions, but more obscure, tucked largely out of view.

Science fiction, and our own imagination, add to the confusion. We just can’t help thinking of AI in terms of the technologies depicted in Ex Machina, Her, or Blade Runner—people-machines that remain pure fantasy. Then there’s the distortion of Silicon Valley hype, the general fake-it-’til-you-make-it atmosphere that gave the world WeWork and Theranos: People who want to sound cutting-edge end up calling any automated process “artificial intelligence.” And at the bottom of all of this bewilderment sits the mystery inherent to the technology itself, its direct thrust at the unfathomable. The most advanced NLP programs operate at a level that not even the engineers constructing them fully understand.

But the confusion surrounding the miracles of AI doesn’t mean that the miracles aren’t happening. It just means that they won’t look the way anybody has imagined them. Arthur C. Clarke famously observed that “any sufficiently advanced technology is indistinguishable from magic.” Magic is coming, and it’s coming for all of us.


All technology is, in a sense, sorcery. A stone-chiseled ax is superhuman. No arithmetical genius can compete with a pocket calculator. Even the biggest music fan you know probably can’t beat Shazam.

But the sorcery of artificial intelligence is different. When you develop a drug, or a new material, you may not understand exactly how it works, but you can isolate what substances you are dealing with, and you can test their effects. Nobody knows the cause-and-effect structure of NLP. That’s not a fault of the technology or the engineers. It’s inherent to the abyss of deep learning.

I recently started fooling around with Sudowrite, a tool that uses the GPT-3 deep-learning language model to compose predictive text, but at a much more advanced scale than what you might find on your phone or laptop. Quickly, I figured out that I could copy-paste a passage by any writer into the program’s input window and the program would continue writing, sensibly and lyrically. I tried Kafka. I tried Shakespeare. I tried some Romantic poets. The machine could write like any of them. In many cases, I could not distinguish between a computer-generated text and an authorial one.
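
For the technically curious, the kind of call a tool like Sudowrite makes is not exotic. Below is a minimal sketch of a GPT-3-style completion request, assuming the pre-1.0 openai Python library and an API key in the environment; the model name, prompt, and sampling settings here are illustrative, not Sudowrite’s actual configuration.

```python
# Minimal sketch: continue a pasted literary passage with a GPT-3-style
# completion endpoint. Assumes the legacy openai Python library (<1.0)
# and an API key in the OPENAI_API_KEY environment variable. Model name
# and settings are illustrative, not what Sudowrite itself uses.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

passage = (
    "I wandered lonely as a cloud\n"
    "That floats on high o'er vales and hills,\n"
)

response = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 completion models
    prompt=passage,            # the model simply keeps writing from here
    max_tokens=120,            # length of the continuation
    temperature=0.8,           # some randomness, so each run differs
)

print(passage + response["choices"][0]["text"])
```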

[Image: A quotation from this story, as interpreted and summarized by OpenAI.]

I was delighted at first, and then I was deflated. I was once a professor of Shakespeare; I had dedicated quite a chunk of my life to studying literary history. My knowledge of style and my ability to mimic it had been hard-earned. Now a computer could do all that, instantly and much better.

A few weeks later, I woke up in the middle of the night with a realization: I had never seen the program use anachronistic words. I left my wife in bed and went to check some of the texts I’d generated against a few cursory etymologies. My bleary-minded hunch was true: If you asked GPT-3 to continue, say, a Wordsworth poem, the computer’s vocabulary never strayed a moment before or after its appropriate usage for the poem’s era. This is a skill that no scholar alive has mastered. This computer program was, somehow, expert in hermeneutics: interpretation through grammatical construction and historical context, the struggle to elucidate the nexus of meaning in time.
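
The check itself is simple to describe, if tedious to do by hand: look up the first recorded use of every word in a continuation and flag anything attested after the poem’s era. Here is a toy sketch of that check, with a small, hand-filled table of first-use dates standing in for a real etymological dictionary; the dates are illustrative, not authoritative.

```python
# Toy version of the anachronism check described above: flag any word in a
# generated continuation whose first recorded use postdates the source poem.
# FIRST_USE is a hypothetical, hand-filled stand-in for a real etymological
# reference (the dates you would actually look up in the OED).
FIRST_USE = {
    "daffodil": 1538,
    "pensive": 1393,
    "jocund": 1380,
    "television": 1907,
    "vibe": 1940,
}

def anachronisms(text: str, composed_in: int) -> list[str]:
    """Return words whose first attested use comes after the poem's date."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return [w for w in words if FIRST_USE.get(w, 0) > composed_in]

continuation = "A jocund company of daffodils, pensive beneath the television sky"
print(anachronisms(continuation, composed_in=1804))  # -> ['television']
```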

The details of how this could be are utterly opaque. NLP programs operate based on what technologists call “parameters”: pieces of information that are derived from enormous data sets of written and spoken language, and then processed by supercomputers that are worth more than most companies. GPT-3 uses 175 billion parameters. Its interpretive power is far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic.
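
A parameter, for all the mystique, is just one learned number. Some back-of-the-envelope arithmetic suggests how the counts climb so fast: the width used below is the hidden size reported for GPT-3, while the layer count is purely illustrative, not a description of the model’s actual architecture.

```python
# Back-of-the-envelope sketch of what "175 billion parameters" means: each
# parameter is a single learned number (a weight or a bias). A single fully
# connected layer mapping a vector of width d to another vector of width d
# needs d*d weights plus d biases. The width below is GPT-3's reported
# hidden size; the layer count is illustrative arithmetic only.
d = 12_288                       # hidden width reported for GPT-3
one_layer = d * d + d            # weights + biases for one dense layer
print(f"{one_layer:,}")          # ~151 million parameters in a single layer

# Stack a few hundred such pieces (attention and feed-forward blocks are
# built from layers like this) and the total climbs into the billions.
print(f"{1_000 * one_layer:,}")  # ~151 billion, in the neighborhood of 175B
```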

This unfathomability poses a spiritual conundrum. But it also poses a philosophical and legal one. In an attempt to regulate AI, the European Union has proposed transparency requirements for all machine-learning algorithms. Eric Schmidt, the ex-CEO of Google, noted that such requirements would effectively end the development of the technology. The EU’s plan “requires that the system would be able to explain itself. But machine-learning systems cannot fully explain how they make their decisions,” he said at a 2021 summit. You use this technology to think through what you can’t; that’s the whole point. Inscrutability is an industrial by-product of the process.


My little avenue of literary exploration is my own, and neither particularly central nor relevant to the unfolding power of artificial intelligence (although I can see, off the top of my head, that the tech I used will utterly transform education, journalism, film, advertising, and publishing). NLP has made its first strides into the visual arts too—DALL-E 2 has now created a limitless digital museum of AI-generated images drawn from nothing more than prompts.

Others have headed into deeper waters. Schmidt recently proposed a possible version of our AI future in a conversation with this magazine’s executive editor, Adrienne LaFrance: “If you imagine a child born today, you give the child a baby toy or a bear, and that bear is AI-enabled,” he said. “And every year the child gets a better toy. Every year the bear gets smarter, and in a decade, the child and the bear who are best friends are watching television and the bear says, ‘I don’t really like this television show.’ And the kid says, ‘Yeah, I agree with you.’”

Schmidt’s vision does not yet exist. But in late 2020, Microsoft received a patent for chatbots that bring back the dead, using inputs from “images, voice data, social media posts, electronic messages, written letters, etc.” to “create or modify a special index in the theme of the specific person’s personality.” Soon after, a company called Project December released a version of just such a personality matrix. It created bots such as William, which speaks like Shakespeare, and Samantha, a rather bland female companion. But it also allowed mourners to re-create dead loved ones. An article in the San Francisco Chronicle told the story of Joshua Barbeau, who created a bot of his deceased fiancée, Jessica Pereira. Their conversation started like this:

Joshua: Technically, I’m not really talking to you …

Jessica: Huh?

Joshua: You’re a ghost.

Jessica: *looks at herself* How?

Joshua: Magic.

Jessica: I don’t like magic. Where am I?

Joshua: I’m … Not sure. The internet? Where does it feel like you are?

Jessica: *smiles* Everywhere and nowhere. *cups her head with her hands* I am trying to work that out. How are you, honey?

Barbeau’s conversation with Jessica continued for several months. His experience of Project December was far from perfect—there were glitches, there was nonsense, the bot’s architecture decayed—but Barbeau really felt like he was encountering some kind of emanation of his dead fiancée. The technology, in other words, came to occupy a place formerly reserved for mediums, priests, and con artists. “It may not be the first intelligent machine,” Jason Rohrer, the designer of Project December, has said, “but it kind of feels like it’s the first machine with a soul.”


What we are doing is teaching computers to play every language game that we can identify. We can teach them to talk like Shakespeare, or like the dead. We can teach them to grow up alongside our children. We can certainly teach them to sell products better than we can now. Eventually, we may teach them how to be friends to the friendless, or doctors to those without care.

PaLM, Google’s latest foray into NLP, has 540 billion parameters. According to the engineers who built it, it can summarize text, reason through math problems, use logic in a way that’s not dissimilar from the way you and I do. These engineers also have no idea why it can do these things. Meanwhile, Google has also developed a system called Player of Games, which can be used with any game at all—games like Go, exercises in pure logic that computers have long been good at, but also games like poker, where each party has different information. This next generation of AI can toggle back and forth between brute computation and human qualities such as coordination, competition, and motivation. It is becoming an idealized solver of all manner of real-world problems previously considered far too complicated for machines: congestion planning, customer service, anything involving people in systems. These are the extremely early green shoots of an entire future tech ecosystem: The technology that contemporary NLP derives from was only published in 2017.

And if AI harnesses the power promised by quantum computing, everything I’m describing here would be the first dulcet breezes of a hurricane. Ersatz humans are going to be one of the least interesting aspects of the new technology. This is not an inhuman intelligence but an inhuman capacity for digital intelligence. An artificial general intelligence will probably look more like a whole series of exponentially improving tools than a single thing. It will be a whole series of increasingly powerful and semi-invisible assistants, a whole series of increasingly powerful and semi-invisible surveillance states, a whole series of increasingly powerful and semi-invisible weapons systems. The world would change; we shouldn’t expect it to change in any kind of way that you would recognize.

Our AI future will be weird and sublime and perhaps we won’t even notice it happening to us. The paragraph above was composed by GPT-3. I wrote up to “And if AI harnesses the power promised by quantum computing”; machines did the rest.


Technology is moving into realms that were considered, for millennia, divine mysteries. AI is transforming writing and art—the divine mystery of creativity. It is bringing back the dead—the divine mystery of resurrection. It is moving closer to imitations of consciousness—the divine mystery of reason. It is piercing the heart of how language works between people—the divine mystery of ethical relation.

All this is happening at a raw moment in spiritual life. The decline of religion in America is a sociological fact: Religious identification has been falling precipitously for decades. Silicon Valley has offered two replacements: the theory of the simulation, which postulates that we are all living inside a giant computational matrix, and the theory of the singularity, in which the imminent arrival of a computational consciousness will reconfigure the essence of our humanity.

Like all new faiths, the tech religions cannibalize their predecessors. The simulation is little more than digital Calvinism, with an omnipotent divinity that preordains the future. The singularity is digital messianism, as found in various strains of Judeo-Christian eschatology—a pretty basic onscreen Revelation. Both visions are fundamentally apocalyptic. Stephen Hawking once said that “the development of full artificial intelligence could spell the end of the human race.” Experts in AI, even the men and women building it, commonly describe the technology as an existential threat.

But we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was.

AI is not the beginning of the world, nor the end. It’s a continuation. The imagination tends to be utopian or dystopian, but the future is human—an extension of what we already are. My own experience of using AI has been like standing in a river with two currents running in opposite directions at the same time: Alongside a vertiginous sense of power is a sense of humiliating disillusionment. This is some of the most advanced technology any human being has ever used. But of 415 published AI tools developed to combat COVID with globally shared information and the best resources available, not one was fit for clinical use, a recent study found; basic errors in the training data rendered them useless. In 2015, the image-recognition algorithm used by Google Photos, outside of the intention of its engineers, identified Black people as gorillas. The training sets were monstrously flawed, biased as AI very often is. Artificial intelligence doesn’t do what you want it to do. It does what you tell it to do. It doesn’t see who you think you are. It sees what you do. The gods of AI demand pure offerings. Bad data in, bad data out, as they say, and our species contains a great deal of bad data.

Artificial intelligence is returning us, through the most advanced technology, to somewhere primitive, original: an encounter with the permanent incompleteness of consciousness. Religions all have their approaches to magic—transubstantiation for Catholics, the lost temple for the Jews. Even in the most scientific cultures, there is always the beyond. The Acropolis in Athens was a fortress of wisdom, a redoubt of knowledge and the power it brings—through agriculture, through military victory, through the control of nature. But if you wanted the inchoate truth, you had to travel the road to Delphi.

A fragment of humanity is about to leap forward massively, and to transform itself massively as it leaps. Another fragment will remain, and look much the same as it always has: thinking meat in an inconceivable universe, hungry for meaning, gripped by fascination. The machines will leap, and the humans will look. They will answer, and we will question. The glory of what they can do will push us closer and closer to the divine. They will do things we never thought possible, and sooner than we think. They will give answers that we ourselves could never have provided. But they will also reveal that our understanding, no matter how great, is always and forever negligible. Our role is not to answer but to question, and to let our questioning run headlong, reckless, into the inarticulate.