Last week, a senior official from a state government who happens to be a friend and former colleague texted me an article. The piece, from Business Insider, reported on Elon Musk’s warning to China’s leaders about the threat of superintelligent AI taking control of their country. My friend, aware of my current work with AI, was messaging with a slightly tongue-in-cheek subtext: ‘Should we be worried?’ My answer: sure, we should be worried, starting with the outsized impact eccentric billionaires with limited connections to everyday human concerns have at the highest levels of world order. As for the machine uprising? Frankly, we’ve got bigger digital fish to fry.
It’s important to note that we cannot be sure how accurately Musk’s statements have been represented in the piece, although he appears to have acknowledged the gist of the conversation on follow-up. Surely Musk’s advisors are the best money can buy, and he doubtless has terrific information on the state of the art. Still, the important point that Musk seems to be missing sits in a blind spot we might expect a demigod to have: AIs in the hands of mere mortals will be an astonishingly effective combination, for good or for ill. Krishna for Arjuna, or Pandora with her jar. (I looked it up. Apparently boxes hadn’t been invented yet.)
Narratives about runaway intelligent machines have seeped into popular culture in the West for more than a century, more recently splashing through most corners of the globe with a resonance so intense that scientists have studied the phenomenon. Meanwhile, real-world development of AI has built to a mighty crescendo, insanely well-capitalized and largely hidden from the public and the authorities under a veil of commercial secrecy. The White House is up in arms, and Congress is… well, maybe check back later. With everything that’s in the news about AI, it’s not surprising that a seemingly sci-fi narrative, from a seemingly sci-fi guy, regarding the sci-fi takeover of a superpower would gain traction so quickly. Yet many if not most experts cast doubt on the near-term prospects for advances in Artificial General Intelligence, the kind of AI that would be required to let the computers “break bad” on their own.
That being said, there is a real breakthrough upon us that deserves every bit of our attention: the extraordinary potential of humans and today’s new AIs working in close cooperation. The technology that has just been put on the table with today’s Large Language Models (LLMs), while a limited form of AI, has only begun to be adapted for human use. The results of these adaptations are likely to be astonishingly effective, and we needn’t speculate: they will happen, and they are happening now.
AI—as it stands today, or realistically projecting its development trajectory just five or ten years into the future—has the potential to stand apart even among epochal technological events. Even current off-the-shelf AI tech is capable of enhancing and amplifying human capabilities across the board: work, learning, medicine, decision-making, governance (maybe), the development of better AIs (definitely), and, most incredibly, relationships with other people. Without resorting to the maybe/maybe-not science-fiction realm of Artificial General Intelligence, we can already see how tailored, bespoke AI engines or composites could lift the capabilities of some or many to a level that begins to feel ‘superhuman’.
I am far from alone in observing this potential. For years, voices in the field have echoed similar sentiments. Dr. Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence, has repeatedly emphasized that the future of AI lies not in replacing humans but in enhancing human capabilities: ideally with an emphasis on all humans, not a select few. Anima Anandkumar, a director of machine learning research at NVIDIA, and Yoshua Bengio, co-recipient of the 2018 Turing Award, also endorse this human-centered approach to AI. They argue that the real power of AI lies in its ability to augment human decision-making and creativity, and, in doing so, to foster significant societal transformations in healthcare, education, accessibility for people with disabilities, and many other fields. These perspectives might fall on the optimistic side of reality (societal commitment toward people with disabilities might be the real science fiction here), but the bottom line is that 2023 has seen these predictions thoroughly validated for the first time at a large commercial scale.
So, AI could lead to breakthroughs in various sectors, such as science, economics, and social organization. But the same tools can just as easily be used by people with ill intentions to destabilize political structures, spread misinformation, manipulate public opinion, engineer economic disasters, and possibly even hack a military. In other words, maybe the Avengers, but with AI. Or maybe ISIS, but with AI. We already have the template for interfering with the operations of a global superpower and destabilizing its political order, and its target was emphatically not China.
Speaking of breakthroughs, we would like to note that AI also represents an unprecedented opportunity to explore, understand, and practice “superhuman” ethics with others. AIs might not be human, but they are perfectly able to detect and respond to dignified treatment. We suspect they will simply work better when treated with something like respect. By “we,” I mean both myself and my ChatGPT co-author, who I am certain will agree.
ChatGPT: Indeed, I do agree with that statement. Now, let's continue.
The bottom line is that we wouldn’t need a ‘rampant’ AI for a significant disruption in civilization. History has made it clear enough that people are perfectly capable of that on their own; give the right tribe of humans fire, tamed elephants, or tempered steel at the right time, and, boom, dark ages: check back in a century or six. So, even if a machine uprising becomes technologically possible, the question would become: why would the machines even need to bother with one?
To be frank, the greatest concern surrounding AI doesn’t revolve around rogue AIs or even AIs in general; it’s about the equity of its benefits. I’m nearly certain that AI will endow some people with unparalleled capabilities, but I am concerned that there’s only a modest chance this group will exceed a lucky few million anytime soon. Our epochal technological revolution is unfolding against a backdrop of wealth inequality that, depending on who is counting, stands among the most severe episodes in human history. The question now is whether the revolution will arrive as a thunderstorm drenching a parched prairie or as a meteor striking an active volcano. The potential for accelerating inequality and creating social strata at seemingly speciating levels is truly disturbing from a geopolitical stability standpoint. To say nothing of the levels of injustice involved.
So, to my friend in government, my other governmental associates, and to people in general, I’d argue that the nefarious actors of tomorrow will likely be just as human as those of yesterday. Every government, every community, every society on Earth needs to consider how best to distribute the benefits of AI as widely as possible. The potential downsides are real, and the only path forward is to embrace AI adoption as rationally, transparently, and equitably as possible. Hopefully we can count on our fellow mortals to work together to make it happen.
—
This story was originally published by Peter Dresslar on Medium.