UPDATED 10:00 EDT / MARCH 12 2024

How executive leaders can manage the impacts of the US executive order on AI

The recent U.S. Executive Order on Safe, Secure and Trustworthy AI has major implications for U.S. and non-U.S. entities across the public, private, academic and nongovernmental sectors, as well as for consumers and citizens.

The EO also affects U.S. federal agencies and those who work with them. It will undoubtedly have a significant impact on the global artificial intelligence vendor and solutions market, including how vendors address the new standards for the development, functionality, use and output of AI.

The EO sets forth new mandates, directives and guidance to ensure that developers and users of AI, including gen AI, proactively weigh AI value against AI harms to certain rights throughout the AI lifecycle. Consequently, the EO expands and repositions the risks of loss associated with responsible AI failures, which will affect current and future AI investments and the projected corresponding return on investment. With this in mind, public and private sector executive leaders should adjust leadership priorities, reconcile AI investment with redistributed risks of loss, and prepare today for a more regulated tomorrow.

Risk of loss from human harms

The EO calls out several specific human and societal harms from AI and gen AI interaction and use. Those include fraud, discrimination and threats to privacy. To mitigate these harms, the EO suggests that organizations conduct more proactive due diligence and monitor third-party AI services for greater transparency.

For U.S. federal agencies, the EO provides mitigation guidance by prioritizing the identification of approved AI use cases. These efforts, though, must be continuous, since training data and the corresponding outputs are dynamic. That requires a level of discipline that may be difficult to implement at scale.
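
To make that continuous discipline concrete, here is a minimal sketch, in Python, of how an agency might track approved AI use cases and flag them for periodic revalidation. The registry shape and the 90-day cadence are assumptions for illustration; the EO prescribes neither.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative only: the EO does not prescribe a data model or cadence.
# This sketch assumes each approved use case must be revalidated on a
# fixed schedule because training data and outputs drift over time.

REVALIDATION_INTERVAL = timedelta(days=90)  # assumed cadence, not an EO mandate


@dataclass
class ApprovedUseCase:
    name: str
    owning_unit: str
    last_validated: datetime


def due_for_revalidation(registry, now):
    """Return the approved use cases whose last validation has gone stale."""
    return [uc for uc in registry if now - uc.last_validated > REVALIDATION_INTERVAL]


registry = [
    ApprovedUseCase("benefits chatbot", "customer service", datetime(2024, 1, 2)),
    ApprovedUseCase("fraud triage model", "enforcement", datetime(2023, 9, 15)),
]
print([uc.name for uc in due_for_revalidation(registry, datetime(2024, 3, 12))])
# -> ['fraud triage model']
```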

For private sector executive leaders, the EO encourages increased focus on AI use cases that incorporate harm-reduction functionality, as well as proactive observability of AI outputs and gen AI artifacts. The primary use cases should focus on balancing AI value against potential harm to individuals in their capacities as citizens, consumers, employees and other similar roles that carry the rights and liberties called out in the EO.
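
As one way to picture that observability, here is a minimal sketch that wraps a gen AI call so every output is logged and screened for harm signals. The screen_for_harm check and the logging destination are assumptions; a real deployment would use a proper policy classifier and a durable audit store.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_observability")


def screen_for_harm(text: str) -> list[str]:
    """Hypothetical harm screen; in practice this might be a policy
    classifier or a human-review queue. The EO does not specify one."""
    flags = []
    if "social security" in text.lower():
        flags.append("potential_privacy_harm")
    return flags


def observed_generate(model_call, prompt: str) -> str:
    """Wrap any gen AI call so every output is logged and screened."""
    output = model_call(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "harm_flags": screen_for_harm(output),
    }
    log.info(json.dumps(record))  # in practice, ship to an audit store
    return output
```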

Prepare today for more regulated AI tomorrow

Through the EO, the U.S. is sending a clear signal that AI and gen AI are far more than disruptive technologies; they have far-reaching consequences for every aspect of daily life. Unlike most disruptive technologies, to which governments are typically slow to react, the EO sets short deadlines for action by government agencies and the private sector, and executive leadership will be tested accordingly.

Under the EO, nearly all U.S. federal agencies will be required to appoint a chief artificial intelligence officer. This position will be responsible for coordinating the agency's use of AI, promoting trustworthy AI innovation and managing risks from AI use. The EO also requires that chief AI officers in federal agencies advance equity when developing, acquiring and using AI and automated systems in the federal government.

The EO will have a broad impact on AI strategy: it alters and redistributes the risk of loss from AI harms judged too costly relative to AI benefit or value, by shifting who bears the financial and nonfinancial costs of responsible AI failures.

The EO will also shape corporate, individual and machine behaviors through new regulatory frameworks, alongside the industry self-regulation we are already beginning to see take effect. Together, these forces create opportunities for new market solutions, but oversight of vendors by both government and the commercial sector will make that market demanding.

Organizations should prepare now for compliance and, more importantly, for a safer, more trustworthy and more reliable AI future. Executive leaders can work with stakeholders across their organizations, including legal, finance, government relations and the like, to implement a comprehensive AI trust, risk and security management program. Optimally, a dedicated AI risk management office should be established to work across organizational lines such as data and analytics, legal, compliance, AI use case management and model development, among others.

Establish secure and trustworthy AI

The EO provides a glimpse of the future vision, guidelines and standards for AI that are yet to be developed.

Executive leaders affected by the EO can prepare for safe, secure and trustworthy AI by:

  • Identifying the right players responsible for compliance with the EO’s direction. The typical participants in AI-related decisions include representatives of legal, compliance, risk management, privacy, security, procurement and diversity, equity and inclusion, as well as data and analytics leaders and subject-matter experts. Most of these people are new to AI and should acquire AI literacy. To meet the challenges of the EO, executive leaders can organize AI governance, AI ethics and responsible AI committees.
  • Making sure decisions about AI are transparent. Documenting why AI use cases, tools and approaches were selected, and which aspects of those decisions were discussed and agreed upon, is a good immediate step. This means recording choices and validation procedures for each AI initiative or use case, as in the sketch after this list.
  • Ensuring a feedback loop for full transparency, where AI users or individuals affected by AI have an easy means of providing feedback. This allows nuances and exceptions to be captured early and the feedback to be used to improve AI solutions. Even if an AI system’s design accounts for every safe, secure and trustworthy AI consideration, its outcomes cannot be anticipated deterministically. Exception handling is the mechanism for capturing what has not been accounted for within or between system elements, and some of that is inevitable because behavior cannot be fully known in advance.
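
To make the last two points concrete, here is a minimal sketch of a decision record that documents why an AI use case was selected and how it will be validated, together with a feedback log for affected individuals. The schema and field names are assumptions, not anything the EO defines.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    use_case: str
    rationale: str             # why this use case, tool or approach was selected
    validation_procedure: str  # how the decision will be validated
    agreed_by: list[str]       # stakeholders who discussed and agreed
    feedback: list[dict] = field(default_factory=list)

    def add_feedback(self, source: str, comment: str) -> None:
        """Capture feedback from users or individuals affected by the AI."""
        self.feedback.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "source": source,
            "comment": comment,
        })


record = AIDecisionRecord(
    use_case="claims triage assistant",
    rationale="reduces backlog; harms weighed: discrimination, privacy",
    validation_procedure="quarterly bias audit against held-out claims",
    agreed_by=["legal", "compliance", "privacy", "data and analytics"],
)
record.add_feedback("claims adjuster", "down-ranks claims with non-English notes")
```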

Executive leaders should adapt their AI strategy and AI investment priorities to address new or elevated risks of harm and loss from responsible AI failures. Start by identifying and quantifying the potential AI risks of harm most relevant to stakeholder priorities, as in the simple scoring sketch below. Prepare for EO-driven regulatory compliance and, more generally, for a safer AI future.
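
As a starting point for that quantification, here is a minimal sketch using a generic likelihood-times-impact score over the harms the EO names. The scoring convention and the numbers are assumptions for illustration, not a method the EO prescribes.

```python
# Generic likelihood-times-impact scoring, a common risk-register convention;
# the EO does not prescribe this method, and the scores below are invented.

harms = {
    # harm named in the EO: (likelihood 1-5, impact 1-5), illustrative scores
    "fraud": (2, 5),
    "discrimination": (3, 5),
    "threats to privacy": (4, 4),
}

ranked = sorted(harms.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for harm, (likelihood, impact) in ranked:
    print(f"{harm}: score={likelihood * impact}")
# threats to privacy: score=16
# discrimination: score=15
# fraud: score=10
```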

Overall, the scope of the EO is comprehensive when it comes to AI safety, trustworthiness and security. New regulations are underway, and the private sector has agreed to voluntary commitments, a trend that is expected to continue.

Organizations should focus on the messaging of the EO to get a head start on making their own AI applications and systems safer, more trustworthy and more secure, well before the regulations apply to them directly. Executive leaders should start by adjusting leadership priorities and reconciling AI investment with the redistributed risks of loss.

Lydia Clougherty Jones is a senior director analyst at Gartner Inc. in the Data & Analytics Group where she covers D&A strategy and D&A value. Gartner analysts will provide additional analysis on D&A strategy and navigating AI global regulatory dynamics at the Gartner Data & Analytics Summit, taking place this week in Orlando, Florida.
