

The recent U.S. Executive Order on Safe, Secure and Trustworthy AI has major implications for U.S. and non-U.S. entities across the public, private, academic and nongovernmental sectors, as well as for consumers and citizens.
The EO also affects U.S. federal agencies and those who work with them, and it will undoubtedly have a significant impact on the global artificial intelligence vendor and solutions market, including how vendors address the new standards for the development, functionality, use and output of AI.
The EO sets forth new mandates, directives and guidance to ensure that developers and users of AI, including gen AI, proactively weigh AI value against AI harms to certain rights throughout the AI lifecycle. Consequently, the EO expands and repositions the risks of loss associated with responsible AI failures, which will affect current and future AI investments and the corresponding projected return on investment. With this in mind, public and private sector executive leaders should adjust leadership priorities, reconcile AI investment with redistributed risks of loss, and prepare today for a more regulated tomorrow.
The EO calls out several specific human and societal harms from AI and gen AI interaction and use. Those include fraud, discrimination and threats to privacy. To mitigate these harms, the EO suggests that organizations conduct more proactive due diligence and monitor third-party AI services for greater transparency.
For U.S. federal agencies, the EO provides mitigation guidance by prioritizing the identification of approved AI use cases. These efforts, though, must be continuous, since training data and the resulting outputs are dynamic. This requires a level of discipline that may be difficult to sustain at scale.
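To make the needed discipline concrete, here is a minimal sketch, in Python, of what a continuously reviewed inventory of approved AI use cases could look like. Everything in it (the `AIUseCase` record, the `needs_review` check, the 90-day cadence) is a hypothetical illustration, not anything the EO itself prescribes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed review cadence for illustration only; the EO does not set one.
REVIEW_INTERVAL = timedelta(days=90)

@dataclass
class AIUseCase:
    name: str
    owner: str
    approved_on: date
    last_reviewed: date

    def needs_review(self, today: date) -> bool:
        # A use case falls out of good standing once its review lapses,
        # reflecting that training data and outputs drift over time.
        return today - self.last_reviewed > REVIEW_INTERVAL

registry = [
    AIUseCase("benefits-eligibility-triage", "agency-x",
              approved_on=date(2024, 1, 10), last_reviewed=date(2024, 1, 10)),
]

overdue = [uc.name for uc in registry if uc.needs_review(date.today())]
print(f"Use cases overdue for re-review: {overdue}")
```

Making lapsed reviews the default failure mode keeps the burden on demonstrating continued approval rather than on discovering problems after the fact.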
For private sector executive leaders, the EO encourages increased focus on AI use cases that incorporate harm reduction functionality, as well as proactive observability of AI outputs and gen AI artifacts. The primary use cases here should balance AI value against potential harm to individuals in their capacities as citizens, consumers, employees and similar roles that carry the rights and liberties called out in the EO.
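One way to picture "proactive observability" is a screen that checks gen AI outputs against the harm categories the EO names and logs anything flagged for human review. The keyword lists below are a deliberately crude stand-in for real harm classifiers, and every identifier (`HARM_SCREENS`, `review_output`) is an assumption made for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-output-monitor")

# Crude keyword screens as placeholders for trained harm classifiers,
# keyed to harm categories the EO calls out (e.g., privacy, discrimination).
HARM_SCREENS = {
    "privacy": ["ssn", "social security", "home address"],
    "discrimination": ["race", "gender", "religion"],
}

def review_output(output_text: str, use_case: str) -> list[str]:
    """Return the harm categories under which an output should be reviewed."""
    lowered = output_text.lower()
    flags = [
        category
        for category, terms in HARM_SCREENS.items()
        if any(term in lowered for term in terms)
    ]
    for category in flags:
        # Flagged outputs are logged for human review, not auto-blocked.
        log.info("use_case=%s flagged for %s review", use_case, category)
    return flags

review_output("Applicant's social security number is on file.", "loan-triage")
```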
Through the EO, the U.S. is sending a clear signal that AI and gen AI are far more than disruptive technologies; they have far-reaching consequences for every aspect of daily life. As such, unlike many other disruptive forces, to which government organizations are typically slow to react, the EO demands speed: new directives to government agencies and the private sector must be acted on quickly, and executive leadership will be tested accordingly.
Through the EO, nearly all U.S. government agencies will be required to appoint a chief artificial intelligence officer. This position will be responsible for coordinating the agency's use of AI, promoting trustworthy AI innovation and managing the risks from that use. The EO also requires that chief AI officers at U.S. government agencies advance equity when developing, acquiring and using AI and automated systems in the federal government.
The EO will have a broad impact on AI strategy: it alters and redistributes the risk of loss from AI harms, shifting who bears the financial and nonfinancial costs of responsible AI failures when those harms are judged too costly relative to AI's benefit or value.
The EO will also shape the corporate, individual and machine behaviors that arise from new regulatory frameworks, alongside the industry self-regulation we are already beginning to see take effect. Together, these forces create opportunities for new market solutions, but oversight of vendors by both government and the commercial sector will make delivering those solutions demanding.
Organizations should prepare now for compliance and, more importantly, for a safer, more trustworthy and more reliable AI future. Executive leaders can work with stakeholders across their organizations, including legal, finance, government relations and the like, to implement a comprehensive AI trust, risk and security management program. Optimally, a dedicated AI risk management office should be established to work across organizational lines such as data and analytics, legal, compliance, and AI use case management and model development, among others.
The EO provides a glimpse of the future vision, guidelines and standards for AI that are yet to be developed.
Executive leaders affected by the EO can prepare for safe, secure and trustworthy AI by adapting their AI strategy and AI investment priorities to address new or elevated risks of harm and loss from responsible AI failures, identifying and quantifying the potential AI risks of harm most relevant to stakeholder priorities, and preparing for EO-driven regulatory compliance and, in general, a safer AI future.
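As a starting point for identifying and quantifying those risks, a simple likelihood-times-impact score can rank harms for leadership attention. The 1-to-5 scales and the example entries below are illustrative assumptions only, not a prescribed methodology.

```python
# Hypothetical sketch: rank AI risks of harm by likelihood x impact
# so leaders can order mitigation and investment decisions.
risks = [
    {"harm": "privacy exposure", "likelihood": 3, "impact": 5},
    {"harm": "discriminatory output", "likelihood": 2, "impact": 5},
    {"harm": "fraud enablement", "likelihood": 2, "impact": 4},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Highest-scoring harms surface first for mitigation.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{risk['harm']}: score {risk['score']}")
```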
Overall, the scope of the EO is comprehensive when it comes to AI safety, trustworthiness and security. New regulations are underway, and the private sector has agreed to voluntary commitments, a trend that is expected to continue.
Organizations should focus on the messaging of the EO to get a head start on making their own AI applications and systems safer, more trustworthy and more secure, well before the regulations apply to them directly. Executive leaders should start by adjusting leadership priorities and reconciling AI investment with redistributed risks of loss.
Lydia Clougherty Jones is a senior director analyst at Gartner Inc. in the Data & Analytics Group, where she covers D&A strategy and D&A value. Gartner analysts will provide additional analysis on D&A strategy and navigating AI global regulatory dynamics at the Gartner Data & Analytics Summit, taking place this week in Orlando, Florida.