Superintelligence is not required for AI to cause harm. That is already happening. AI is used to violate privacy, create and spread disinformation, compromise cyber-security and build biased decision-making systems. Military misuse of AI is imminent. Today's AI systems help repressive regimes to carry out mass surveillance and to exert powerful forms of social control. Containing or reducing these contemporary harms is not only of immediate value, but is also the best bet for easing potential, albeit hypothetical, future x-risk.
It is safe to say that the AI which exists today is not superintelligent. But it is possible that AI will be made superintelligent in the future. Researchers are divided on how soon that may happen, or even whether it will. Still, today's AI models are impressive, and arguably possess a form of intelligence and understanding of the world; otherwise they would not be so useful. Yet they are also easily fooled, liable to generate falsehoods and prone to faulty reasoning. As a result, many contemporary harms stem from AI's limitations, rather than its capabilities.
It is far from obvious whether AI, superintelligent or not, is best thought of as an alien entity with its own agency or as part of the anthropogenic world, like any other technology that both shapes and is shaped by humans. But for the sake of argument, let us assume that at some point in the future a superintelligent AI emerges which interacts with humanity under its own agency, as an intelligent non-biological organism. Some x-risk-boosters suggest that such an AI would cause human extinction by natural selection, outcompeting humanity with its superior intelligence.
Intelligence surely plays a role in natural selection. But extinctions are not the outcomes of struggles for dominance between “higher” and “lower” organisms. Rather, life is an interconnected web, with no top or bottom (consider the virtual indestructibility of the cockroach). Symbiosis and mutualism—mutually beneficial interaction between different species—are common, particularly when one species depends on another for resources. And in this case, AIs depend utterly on humans. From energy and raw materials to computer chips, manufacturing, logistics and network infrastructure, we are as fundamental to AIs’ existence as oxygen-producing plants are to ours.
Perhaps computers could eventually learn to provide for themselves, cutting humans out of their ecology? This would be tantamount to a fully automated economy, which is probably neither a desirable nor an inevitable outcome, with or without superintelligent AI. Full automation is incompatible with current economic systems and, more importantly, may be incompatible with human flourishing under any economic regime—recall the dystopia of Pixar’s “Wall-E”.
Luckily, the path to automating away all human labour is long. Each step offers a bottleneck (from the AIs’ perspective) at which humans can intervene. In contrast, the information-processing labour which AI can perform at next to no cost poses both great opportunity and an urgent socioeconomic challenge.
Some may still argue that AI x-risk, even if improbable, is so dire that prioritising its mitigation is paramount. This echoes Pascal’s wager, the 17th-century philosophical argument which held that it was rational to believe in God, just in case he was real, so as to avoid any possibility of the terrible fate of being condemned to hell. Pascal’s wager, both in its original and AI versions, is designed to end reasoned debate by assigning infinite costs to uncertain outcomes.
In a utilitarian analysis, in which costs are multiplied by probabilities, infinity times any probability other than zero is still infinity. Hence accepting the AI x-risk version of Pascal’s wager might lead us to conclude that AI research should be stopped altogether or tightly controlled by governments. This could curtail the nascent field of beneficial AI, or create cartels with a stranglehold on AI innovation. For example, if governments passed laws limiting the legal right to deploy large generative language models like ChatGPT and Bard to only a few companies, those companies could amass unprecedented (and undemocratic) power to shape social norms, and the ability to extract rent on digital tools that are likely to be critical to the 21st-century economy.
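The arithmetic behind this objection can be made explicit. In a stylised expected-cost framing (a sketch of the wager's logic, not a formal model from the article), assigning an infinite cost $C$ to extinction swamps every other consideration:

```latex
\mathbb{E}[\text{cost}] = p \cdot C,
\qquad C = \infty \;\implies\; \mathbb{E}[\text{cost}] = \infty
\quad \text{for every } p > 0.
```

Because the product is infinite for any non-zero probability, however small, no finite benefit of AI research can ever outweigh it on paper. That is precisely why the wager forecloses reasoned debate: it makes the usual weighing of costs against benefits mathematically impossible.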
Perhaps regulations could be designed so as to reduce the potential for x-risk while also attending to more immediate AI harms? Probably not; proposals to curb AI x-risk are often in tension with those directed at existing AI harms. For instance, regulations to limit the open-source release of AI models or datasets make sense if the goal is to prevent the emergence of an autonomous networked AI beyond human control. However, such restrictions may handicap other regulatory processes, for instance those for promoting transparency in AI systems or preventing monopolies. In contrast, regulation which takes aim at concrete, short-term risks—such as requiring AI systems to honestly disclose information about themselves—will also help to mitigate longer-term, and even existential, risks.
Regulators should not prioritise existential risk posed by superintelligent AI. Instead, they should address the problems which are in front of them, making models safer and their operations more predictable in line with human needs and norms. Regulations should focus on preventing inappropriate deployment of AI. And political leaders should reimagine a political economy which promotes transparency, competition, fairness and the flourishing of humanity through the use of AI. That would go a long way to curbing today’s AI risks, and be a step in the right direction in mitigating more existential, albeit hypothetical, risks.
Blaise Agüera y Arcas is a Fellow at Google Research, where he leads a team working on artificial intelligence. This piece was co-written with Blake Richards, an associate professor at McGill University and a CIFAR AI Chair at Mila – Quebec AI Institute; Dhanya Sridhar, an assistant professor at the Université de Montréal and a CIFAR AI Chair at Mila – Quebec AI Institute; and Guillaume Lajoie, an associate professor at the Université de Montréal and a CIFAR AI Chair at Mila – Quebec AI Institute.
©️ 2023, The Economist Newspaper Limited. All rights reserved.
From The Economist, published under licence. The original content can be found on www.economist.com