Regulatory Challenges to Catastrophic AI Risk
PERFECT IS THE ENEMY OF BETTER
Many factors influence how regulation affects catastrophic AI risk, each with its own tradeoffs. Below I outline the major factors as I perceive them.
Risk Reduction Factors:
Standard Setting: Regulations can set the bar for greater responsibility and accountability, and even voluntary standards can become soft law if incorporated into government tenders or embedded within established practices and industry professional credentials. Improved standards and professionalism within industries can lead to improved governance and record-keeping.
Public Safety and Liability: The availability of insurance, security red teams, and crisis management facilities will tend to limit less-catastrophic risks, and may provide some survivable early warnings of imminent greater disaster.
Compounding iterations: Each advance in AI safety increases the likelihood of building the knowledge infrastructure necessary to mitigate catastrophic risk. The more that basic research into AI safety is undertaken and funded, with career opportunities in a newly established formal research discipline, the greater the likelihood of discoveries that pave the way toward reduced catastrophic risk.
Risk Increase Factors:
Obfuscation: Regulations may drive research underground, where it is harder to monitor, or to ‘flag of convenience’ jurisdictions with lax restrictions. They may also encourage actors to embed dangerous technologies within apparently benign cover operations (multipurpose technologies), or to obfuscate the externalized effects of a system, as in the vehicle emissions scandal (Wikipedia).
Arms race: Recent advances in machine learning, such as large multimodal models (aka Transformers, Large Language Models, Foundation Models) like GPT-3 and DALL-E, illustrate that pouring computing resources (and the funds for them) into colossal models appears to be a worthwhile investment. So far there is no apparent limit or diminishing return on model size, and so state and non-state actors are scrambling to produce the largest models feasible in order to unlock thousands of capabilities never before possible. An arms race is afoot. Such arms races can lead to rapid and unexpected take-off in AI capability, and the rush can blind people to risks, especially when losing the race could mean an existential threat to a nation or organization.
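The scaling intuition behind this race can be made concrete with a few lines of code. The sketch below is a toy illustration assuming the rough power-law form reported in the scaling-law literature (Kaplan et al., 2020); its constants are placeholders of the right order of magnitude, not authoritative values.

```python
# Toy illustration of the scaling intuition behind the arms race: if loss
# falls as a smooth power law of parameter count, every extra order of
# magnitude of model size still buys a measurable gain, with no hard floor
# in sight. Constants are placeholders, not measured values.

def estimated_loss(n_params: float, alpha: float = 0.076, n_c: float = 8.8e13) -> float:
    """Toy power-law loss curve: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} parameters -> estimated loss {estimated_loss(n):.3f}")
```

Under this toy curve, each tenfold increase in parameters yields a steady fractional improvement in loss, which is precisely the dynamic that makes ever-larger models look like a worthwhile bet.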
Perverse incentives: Incentives are powerful forces within organizations, and financialization, moral panic, or fear of political danger may drive personnel to behave in irrational or incorrigible ways.
Postmodern Warfare: Inexpensive drones and other AI-enabled technologies hold tremendous disruptive promise within the realm of warfare, especially given their asymmetric nature. Drone swarms must be controlled by AI technologies, and this may encourage the entire theatre of war to be increasingly delegated to AI, perhaps including the interpretation of rules of engagement and grand strategy. (Lsusr, 2021)
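To see why swarm coordination cannot stay manual, consider even the simplest flocking behaviour. The sketch below is a toy based on Reynolds-style ‘boids’ rules (cohesion plus separation), purely for illustration and not a model of any real system.

```python
# Minimal "boids"-style flocking sketch: every agent needs a fresh steering
# decision each tick, which is why large swarms must be coordinated by
# software rather than by individual human operators. Purely illustrative.
import random

def step(positions, cohesion=0.01, separation=0.05, min_dist=1.0):
    """One tick: each agent drifts toward the flock centre and repels close neighbours."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    moved = []
    for x, y in positions:
        vx, vy = (cx - x) * cohesion, (cy - y) * cohesion
        for ox, oy in positions:
            if (ox, oy) != (x, y) and abs(ox - x) + abs(oy - y) < min_dist:
                vx += (x - ox) * separation   # push away from crowded neighbours
                vy += (y - oy) * separation
        moved.append((x + vx, y + vy))
    return moved

swarm = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(100)]
for _ in range(20):
    swarm = step(swarm)  # 100 agents x 20 ticks = 2,000 steering decisions already
```

Even this trivial rule set demands thousands of per-unit decisions per minute; realistic swarms add sensing, targeting, and adversarial conditions, pushing control decisively toward automation.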
Cyber Warfare: Hacking of systems is increasingly being augmented with machine intelligence (Cisomag, 2021), through GAN-enabled password crackers (Griffin, 2019) and advanced social engineering tools (Newman, 2021). This is equally the case in the realm of defense, where only machine intelligence may provide the swift execution required to defend systems from attack. A lack of international cyberwarfare regulations, and poor international policing of organized cybercrime, may increase catastrophic risks to societal systems.
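The machine-speed defence alluded to above can be sketched briefly. The example below is a minimal illustration assuming scikit-learn is available; the event features and figures are entirely hypothetical.

```python
# Sketch of machine-speed defence: an IsolationForest trained on normal
# traffic flags anomalous login events far faster than human triage could.
# Features and numbers are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per login event: [hour_of_day, failed_attempts, bytes_sent]
normal_events = np.column_stack([
    rng.normal(13, 3, 500),      # daytime activity
    rng.poisson(0.2, 500),       # occasional failed attempts
    rng.normal(2e4, 5e3, 500),   # typical transfer volumes
])
suspect_events = np.array([[3, 40, 9e5], [4, 55, 1.2e6]])  # off-hours bursts

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)
print(detector.predict(suspect_events))  # -1 marks events scored as anomalous
```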
Zersetzung: The human mind is becoming a new theatre of war through personalized generative propaganda, which may even extend to gaslighting attacks on targeted individuals, potentially leading to significant destabilization of societies (Williams, 2021). Such technologies are also plausibly deniable, as it is difficult to prove who is responsible.
Inflexibility: After WWI the German military was prohibited from developing artillery materiel, and so invested in powerful rocket technologies instead, as these were not subject to the restrictions. Similarly, inflexible rules may leave exploitable loopholes, and may not be adaptive enough to accommodate new technologies or even improved industry standards.
Limitation of problem spaces: It may be taboo to allow machine intelligence to work on sensitive issues or to be exposed to controversial (if potentially accurate) datasets. This may limit the ability of AI to make sense of complex issues, and thereby frustrate the search for solutions to crises.
Willful Ignorance: AI may be prevented from perceiving ‘biases’ that are actually uncomfortable truths, as a result of political taboos. For example, it might be prevented from perceiving that women are, as a group, physically less strong than men, and such a blind spot could produce strange behavior, potentially leading to runaway effects.
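The mechanics of such a blind spot are easy to demonstrate on synthetic data: hiding an informative attribute from a model does not remove the underlying pattern, it only blinds the model to it. The sketch below assumes scikit-learn and uses entirely artificial data; no real-world measurements are implied.

```python
# Synthetic demonstration of the blind-spot effect: censoring an informative
# attribute does not remove the real-world pattern, it only degrades the
# model that is forbidden from seeing it. All data here is artificial.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
attribute = rng.integers(0, 2, 1000)                 # a politically sensitive feature
outcome = 5.0 * attribute + rng.normal(0, 1, 1000)   # outcome genuinely depends on it
decoy = rng.normal(0, 1, 1000)                       # an uninformative stand-in

full = LinearRegression().fit(np.column_stack([attribute, decoy]), outcome)
blind = LinearRegression().fit(decoy.reshape(-1, 1), outcome)
print("R^2 with the attribute:", round(full.score(np.column_stack([attribute, decoy]), outcome), 3))
print("R^2 with it censored: ", round(blind.score(decoy.reshape(-1, 1), outcome), 3))
```

The censored model's predictions collapse toward the mean, and a system acting on those predictions will behave strangely wherever the hidden attribute actually matters.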
Conclusions:
Greater transparency and accountability should be major factors in reducing catastrophic risk: all things being equal, they make it easier to know about the risks of systems, and who is culpable for any externalized effects.
On balance, I would expect regulation to be generally beneficial for AI ethics, as long as it is not too inflexible, too restrictive, or overly politicized.
It is very important that technology regulation NEVER becomes a polarizing issue. Broad, bipartisan support must be developed if it is to succeed. Otherwise, a substantial proportion of the population will ignore it, while the greater part wields it as a cudgel, willfully taking people's behavior out of its proper context to unfairly label them as antisocial.
This article was originally featured on Nell Watson's Blog.