Christopher Nolan Shares His Thoughts On Generative AI And How It Compares To Oppenheimer’s Story

Nolan lays out the similarities and differences between Generative AI and the development of the atomic bomb.


Christopher Nolan has largely been a man of science. He depicts some pretty larger-than-life concepts on screen in a believable manner, and even his most outlandish films are grounded in a certain sense of reality. I’ve always been curious to know what Nolan would make of the Generative AI hype train that has been running rampant in recent months. The world of technology has progressed at a frantic pace: the advent of ChatGPT has led to an onslaught of AI-driven tools such as Midjourney, Stable Diffusion, Google’s Bard and Microsoft’s Copilot. So I wondered whether Nolan had the chance, or the time, to get up to speed on any of these developments and, if so, what his take on them would be.

Fortunately, I didn’t have to wait too long. Ahead of Oppenheimer, which bows into traditional and IMAX theaters next month, Nolan has already begun doing press to promote the movie. In one such interview, conducted by Maria Streshinsky of Wired, Nolan gets candid about several of these topics and shares some insightful, at times philosophical views on the whole situation. More importantly, he compares it to the development of the atomic bomb depicted in Oppenheimer and explores how the two eras, despite their similarities, contrast with each other.

During the interview, Wired’s Streshinsky mentions a TED conference on Generative AI that she attended in Vancouver. To her surprise, the first comparison most folks brought up was the development at Los Alamos that led to the atomic bomb. The parallel was drawn most expressly by a technologist who concluded his talk by advocating for “better AI weapons”. Nolan, however, points out a key difference between the two scenarios: the creation of the atomic bomb was governed by the laws of nature, and so was ultimately destined to happen.

There is a fundamental difference. The scientists dealing with the splitting of the atom kept trying to explain to the government, This is a fact of nature. God has done this. Or the creator or whoever you want it to be. This is Mother Nature. And so, inevitably, it’s just knowledge about nature. It’s going to happen. There’s no hiding it. We don’t own it. We didn’t create it. They viewed it as that. And I think you’d be very hard-pressed to make that argument about AI.

Nolan then comments that these problems have been out there for quite some time, yet media professionals and journalists have shown little interest in covering them. The emergence of ChatGPT, however, has triggered a sudden surge of interest in AI, one that is now rather selfishly motivated by the technology’s ability to displace their jobs.

Well, the growth of AI in terms of weapons systems and the problems that it is going to create have been very apparent for a lot of years. Few journalists bothered to write about it. Now that there’s a chatbot that can write an article for a local newspaper, suddenly it’s a crisis.

Nolan then lays out what he sees as the real problem with AI: it lies not in the tool or the algorithm itself, but in the fact that it is being used in a way that absolves humans of all accountability and places the blame squarely on AI’s shoulders. Part of this desire stems from humanity’s need to deify the next gigantic advancement, be it in technology or any other field. That, in turn, motivates an irresponsible use of AI, which has far greater potential to lead to catastrophe. The way I see it, in layman’s terms, it’s similar to handing everyone a gun and letting them run rampant without consequences, then blaming any mass shootings on the weapon instead of the one who wielded it.

That’s part of the problem. Everybody has a very—call it a partisan point of view. The issue with AI, to me, is a very simple one. It’s like the term algorithm. We watch companies use algorithms, and now AI, as a means of evading responsibility for their actions.

If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions—militarily, socio­economically, whatever. The biggest danger of AI is that we attribute these godlike characteristics to it and therefore let ourselves off the hook. I don’t know what the mythological underpinnings of this are, but throughout history there’s this tendency of human beings to create false idols, to mold something in our own image and then say we’ve got godlike powers because we did that.

One solution that tech companies are embracing, or at least giving the impression of embracing, is regulation. Such regulatory oversight, tech CEOs argue, could lead to more responsible use of AI. Again, Nolan cynically points out that tech companies favor this path precisely because they know it isn’t simple for governments to regulate this technology. This is where he draws parallels to Oppenheimer, who worked within the system to try to set up a body to regulate the development of nuclear weapons, in an effort to avoid kicking off an arms race.

The thing with Oppenheimer is that he very much saw the role of scientists postwar as being the experts who had to figure out how to regulate this power in the world. And when you see what happened to him, you understand that that was never going to be allowed to happen. It’s a very complicated relationship between science and government, and it’s never been more brutally exposed than in Oppenheimer’s story. I think there are all kinds of lessons to be learned from it.

So he tried to work from within the establishment and not just turn around and say, you know, what we need is love or whatever. He was very practical in his approach, but he still got crushed. It’s very complex, and I think from our inventors now, it’s very disingenuous for them to say, “We need to be regulated.”

Nolan shares a lot more in the interview, such as Oppenheimer’s views on the development of the hydrogen bomb, and delves further into the subject of regulatory oversight. The entire interview is a crackling read and certainly worth checking out for any Nolan fan.

Oppenheimer releases in traditional and IMAX theaters on July 21, 2023.

