We must erect guardrails to protect against AI’s many risks to society
Artificial intelligence is rife with risks.
Some of these may stem from design limitations in a specific buildout of the technology. Others may be the result of inadequate runtime governance over live AI apps. Still others may be intrinsic to the technology’s inscrutable “black box” complexity.
Wikibon refers to this overarching societal concern as “AI risk management.” Generally, this refers to the myriad ways in which the technology may adversely impact society, as well as the various technological, procedural, regulatory and other guardrails to mitigate the most worrisome threats. Check out this recent Wikibon Action Item for a wide-ranging CrowdChat on this topic.
AI’s principal risks to society include:
- Can we prevent AI from invading people’s privacy?
- Can we eliminate socioeconomic biases that may be baked into AI-driven applications?
- Can we ensure that AI-driven processes are entirely transparent, explicable and interpretable to average humans?
- Can we engineer AI algorithms so that there’s always a clear indication of human accountability, responsibility and liability for their algorithmic outcomes?
- Can we build ethical and moral principles into AI algorithms so that they weigh the full set of human considerations into decisions that may have life-or-death consequences?
- Can we automatically align AI applications with stakeholder values, or at least build in the ability to compromise in exceptional cases, thereby preventing the emergence of rogue bots in autonomous decision-making scenarios?
- Can we throttle AI-driven decision making in circumstances where the uncertainty is too great to justify autonomous actions?
- Can we institute failsafe procedures so that humans may take back control when automated AI applications reach the limits of their competency?
- Can we ensure that AI-driven applications behave in consistent, predictable patterns, free from unintended side effects, even when they are required to dynamically adapt to changing circumstances?
- Can we protect AI applications from adversarial attacks that are designed to exploit vulnerabilities in their underlying statistical algorithms?
- Can we design AI algorithms that fail gracefully, rather than catastrophically, when the environment data departs significantly from circumstances for which they were trained?
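The throttling and failsafe questions above can be made concrete with a small sketch. This is a hypothetical illustration, not drawn from any specific product: the `Decision` type, the `dispatch` function and the 0.9 threshold are all assumptions invented for this example. The idea is simply that an autonomous action is executed only when the model's confidence clears a bar, and is otherwise escalated to a human operator.

```python
# Hypothetical sketch of gating autonomous action on model confidence.
# The Decision type, dispatch() function and 0.9 threshold are
# illustrative assumptions, not any real product's API.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str
    confidence: float  # model's estimated probability, 0.0 to 1.0


def dispatch(decision: Decision, threshold: float = 0.9) -> str:
    """Act autonomously only when confidence clears the threshold;
    otherwise hand control back to a human (a failsafe handoff)."""
    if decision.confidence >= threshold:
        return f"execute:{decision.action}"
    return f"escalate:{decision.action}"
```

In practice the threshold itself becomes a governance artifact: it is something auditors can inspect, regulators can mandate and operators can tune per deployment.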
AI risk mitigation has become a popular topic on the main stages at tech conferences. Researchers can tap into a growing pool of grants that fund innovative approaches for addressing the problem, much of it coming from the coffers of big technology vendors.
It’s a challenging time for legislators, policy analysts and others trying to bring coherence to the confusing, overlapping and sparse regulatory mechanisms for dealing with all of this. For a dissection of the likely global regulatory fallout around facial recognition, for example, check out my recent InformationWeek column on the topic.
AI risk management is the focus of a growing curriculum that’s essential study for the next generation of data scientists and other application developers. AI safeguards will almost certainly find their way into future waves of commercial devices, applications and cloud services, though it’s clear that these will need to coalesce into a broader body of risk mitigation practices in order to be effective.
If you’re a developer, certifying an AI application, service or product as a manageable risk is possible. However, as I discussed in a recent Dataversity article, certification will need to address the following risk factors holistically:

- AI rogue agency: AI must always be under the control of the user or a designated third party. Testing should certify that users can always revoke an AI application’s decision-making agency in circumstances where the uncertainty is too great to justify autonomous actions.
- AI instability: AI’s foundation in machine learning means that much of its operation will be probabilistic and statistical in nature, rather than governed by fixed, repeatable rules. It should be possible to certify that the AI fails gracefully, rather than catastrophically, when environment data departs significantly from the circumstances for which it was trained.
- AI sensor blindspots: When AI is incorporated into robots, drones, self-driving vehicles and other sensor-equipped devices, there should be some indication to the consumer about the visuals, sounds, smells and other sensory inputs they’re unable to detect under realistic operating conditions. Independent testing should uncover these blindspots, as well as any consequent risks from faulty collision avoidance and defensive maneuvering algorithms.
- AI privacy vulnerabilities: Considering that many AI-driven products, such as Amazon.com Inc.’s Alexa, are in the consumer end of the “internet of things,” there must be safeguards to prevent them from inadvertently invading people’s privacy or from exposing people to surveillance hack attacks by external parties.
- AI adversarial exposure: Vulnerabilities in deep neural networks can expose a company to considerable risk if they are discovered and exploited by third parties before defenses have been implemented. Testing should be able to certify that AI-infused products are able to withstand some of the most likely sources of adversarial attacks.
- AI algorithmic inscrutability: Many safety issues with AI may stem from the “black box” complexity of its algorithms. Independent testing of an AI product should call out the risks a consumer faces when using products that embed such algorithms. And there should be disclaimers on AI-driven products that are not ideally transparent, explainable and interpretable to average humans.
- AI liability obscurity: Just as every ingredient in the food chain may be traceable back to a source, the provenance of every AI component of a product should be transparent. Consumer confidence in AI-infused products rests on knowing that there’s always a clear indication of human accountability, responsibility and liability for their algorithmic outcomes. In fact, this will almost certainly become a legal requirement in most industrialized countries, so testing labs should start certifying products that ensure transparency of accountability.
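The graceful-failure requirement in the instability factor above can also be sketched in a few lines. This is a minimal illustration under assumed conditions: the real mechanism in a production system would use proper out-of-distribution detection, but the shape of the safeguard is the same. The `GuardedModel` class, its statistics and the three-sigma cutoff are all assumptions invented for this example.

```python
# Hypothetical sketch of failing gracefully on out-of-distribution input.
# GuardedModel, its stand-in inference and the 3-sigma cutoff are
# illustrative assumptions, not any real framework's API.

import statistics


class GuardedModel:
    def __init__(self, training_samples):
        # Remember simple statistics of the data the model was trained on.
        self.mean = statistics.fmean(training_samples)
        self.stdev = statistics.pstdev(training_samples)

    def predict(self, x, max_sigmas=3.0):
        """Predict only when the input resembles the training data;
        otherwise fail gracefully with an explicit abstention."""
        if abs(x - self.mean) > max_sigmas * self.stdev:
            return None  # abstain rather than extrapolate blindly
        return x * 2.0  # stand-in for the real model's inference
```

Returning an explicit abstention, rather than a confident-looking but meaningless answer, is what separates a catastrophic failure from a graceful one: downstream systems and human operators can detect the abstention and respond.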
There is a perfect storm of AI nasties just waiting to happen. The human race has barely begun to work through the disruptive consequences of this bubbling cauldron of risk. And let’s not overlook the trend toward AI’s weaponization, which poses an existential threat any way you look at it. In my recent SiliconANGLE column, I looked at the technology’s central role in military initiatives everywhere. Check out my recent Datanami article on the threat that AI-driven drones pose to civil defenses, in which I go into depth on advances in counterdrone technology.
Yes, we can protect society from many, if not all, of these AI downsides. However, many tradeoffs must be made, and many people may find the resulting technological, regulatory and other remedies disproportionate to the peril. And we need political leaders everywhere who are not themselves going rogue on these matters.
But we would be naïve to believe that society can ever fully protect itself from all the adverse consequences that may befall us from our AI inventions. The sounder minds among us will have to erect guardrails to keep it all in check without denying humanity the many amazing fruits that AI promises.
Photo: U.S. Air Force