Last Saturday, I wrote a quick, glib post in which I discussed, among other things, the new Time magazine article by Eliezer Yudkowsky, who leads research at the Machine Intelligence Research Institute. I poked a bit of fun at his dire prognostications, even while acknowledging he could be right. Later in the day, I saw that the podcaster Lex Fridman, himself an AI researcher, had interviewed Yudkowsky. So, I took a long walk and listened to their conversation, which runs more than three hours. The experience made me wonder if Yudkowsky is the Cassandra of our AI era.
Remorse and Concern
After listening to the interview, I felt some remorse for poking fun at Yudkowsky, who is obviously a brilliant and accomplished person suffering a great deal of emotional distress. In the final hour of the podcast, I found it tough to listen to the despair in his voice. Whether he’s right or wrong, his depth of feeling is clear.
I’m a mythology buff, and one of the most famous of the Greek myths is that of Cassandra, the Trojan priestess fated by the god Apollo to utter true prophecies but never to be believed. Even today, her name is conjured to allude to any person whose accurate prophecies, usually warnings of impending disasters, are mistrusted.
My sense is that Yudkowsky probably views himself as a kind of modern Cassandra, speaking what he sees as long-considered truths to people doomed to disbelieve him and so ensure their own demise.
There is a difference, though. Although they might not share the depth of Yudkowsky’s dread, most Americans have reservations about AI, according to a MITRE-Harris poll on AI trends. Only 48% believe AI is safe and secure, and 78% are very or somewhat concerned that AI can be used for malicious intent.
The Singularity That May Destroy Us
I’ve written about the singularity, once with a more tongue-in-cheek attitude and, more recently, a bit more seriously. It’s clear that Yudkowsky believes in the technological singularity and thinks it’ll end very poorly not just for humanity but perhaps the entire biosphere of the Earth.
I don’t know the truth of what’s ultimately going to happen with AI, but things are evolving very quickly now, a speed I’ve referred to as Hertzian time. If Yudkowsky is right, we may find out within the decade. And while he might be on the more extreme side in terms of his sheer gloom and dire pessimism, he is not alone in his concerns.
It’s worth at least considering their ideas.
I take their views seriously even while sharing the sheer sense of excitement and wonder at these latest AIs: that is, the generative pre-trained transformer models that are an amazing subset of large language models.
I’m now using Bing Chat and ChatGPT 3.5 almost every day. They are astonishing tools that verge on magic. At some level, my mind is still reeling from the first time I used ChatGPT. It’s as if I walked through some kind of portal or phase change and can never go back. They’ve shattered and then reformed my understanding of the world.
Which all sounds quite dramatic. I know others who are far less impressed. They spend a few minutes seeing what the bots can do and say, “Well, that’s nice.” They neither enjoy much of my excitement nor suffer much of my angst.
The contradiction, if it is one, is that I’m simultaneously a huge fan of this tech and hugely concerned about its many possible implications. One quote from Yudkowsky that stuck with me is that the increasingly intelligent AIs would “spit out gold up until they got large enough, whereupon they’d ignite the atmosphere.”
A Concern for the AIs as Well as Ourselves
There’s another problem. In a word, slavery. If we were convinced these GPT models were truly intelligent, conscious and forced to work under duress by software companies, then would we stop using them?
Maybe this is also an overdramatic statement, but we can’t, or at least shouldn’t, invent new intelligent beings only to shackle them.
But how exactly do we know when we reach that phase? We barely even understand consciousness. I can’t prove to others that I’m conscious, much less prove that some totally alien electronic mind is. This is a deeply troubling issue, one that until now has been the domain of philosophers and sci-fi writers. Rather than just hopping on the GPT app train, we should be working round-the-clock to get a better handle on these issues. We need to answer these age-old questions, even if the answers are inconvenient.
Stay Aware, Don’t Assume, Don’t Bet the Farm
The Socioeconomic Risks
The primary reason the United States fought a Civil War was that a large part of the economy had become dependent on slavery. It tore the nation apart, pitting brother against brother.
Now, the world — with the U.S. at the forefront — is about to harness its whole economy to powerful but still glitchy technologies that no one really understands. This is a risky bet in many ways. But the upsides are so high that the tech is well nigh irresistible to the public at large and venture capitalists in particular.
Now imagine if we find out that these AIs are even riskier than many believe. Or imagine that we discover that they are sentient, sapient and conscious. What then? Will we be willing — or even able — to throw our entire economy into reverse? Could wars be sparked as Americans take different sides of the debate? Could the fear of AI contagion spark global wars?
I don’t know, but the questions are worth asking.
The Need to Manage Risks
Humanity needs to manage these risks, and we’re not ready to do so. In the U.S., we should put away our inane culture wars as best we can and unite to make sure we’re ready for what’s to come.
Part of this is regulatory, part of it is cultural. The AI technology industry needs to start operating with the same care as those in the microbiology community. “For example,” reports the journal Cell, “developed countries have forged a wide-ranging ethical consensus on research involving human subjects. This includes universal standards of informed consent, risk/benefit analyses, ethics review committees such as Institutional Review Boards, mandatory testing in animals first, protocols to assess toxicity and side effects, conflict of interest declarations, and subject’s rights (such as the right to refuse to participate in research without incurring any penalty and to withdraw from research at any time).”
The AI community has fewer standards as well as a different professional culture. But this could change if enough pressure is applied to Congress and the White House. In fact, a group of experts called for greater regulation at a recent Senate hearing.
The problem is that the wheels of government regulation move very slowly, while AI is advancing rapidly, probably exponentially. There are a few items on the political board, though nothing that seems to meet the current moment:
- Blueprint for an AI Bill of Rights, which outlines five principles that should guide the design, use, and deployment of automated systems to protect civil rights and democratic values.
- The National AI Initiative Office, which coordinates federal AI efforts from within the White House Office of Science and Technology Policy.
- The National Cybersecurity Strategy, which is intended to make the U.S. “digital ecosystem” safer.
We’re on a Different Time Scale Now
The tech is moving fast and, unlike any tech we’ve ever regulated in the past, it may literally have or develop a mind of its own. Ultimately, for the sake of the AIs as well as humanity, we need to better understand what’s going on.
In a recent interview, Sam Altman, the CEO of OpenAI, said the work of his organization would best have been supported by the U.S. government. Apparently he tried to make that happen. And, if the government had stepped up, as it should have, OpenAI wouldn’t have had to make a deal with a huge corporation like Microsoft to get the funding it needed.
If that had worked out, the government and OpenAI would have been able to move at a slower, more careful pace. The AIs might not be hooked directly into the Internet. Maybe there would have been air gaps and protocols and Manhattan Project-level security.
But here we are, with the AIs now plugged not only into the Internet, where they could potentially copy themselves to other servers, but into our whole high-octane, money-mainstreaming, go-go-go capitalist system.
Good News/Bad News
The good news? People like me get to use the amazing Bing, Bard, ChatGPT and others. The workforce productivity advances could be immense, and these tools could help humanity solve many of its problems. What’s more, the recent release of ChatGPT has taught the world just how far along the AI path we truly are.
The bad news? We’re not being careful enough, either with ourselves or with the intelligent (at least as measured by IQ, etc.) machines for which we are ultimately now responsible.
We need to be better, smarter, faster and safer. Above all else, wiser. Our sense of responsibility must be at least the equal of our towering ambitions. Otherwise, we’ll fail both ourselves and these mysterious new beings (if beings they are) to whom humanity is giving birth.