On Oct. 30, 2023, President Biden issued an Executive Order (EO) on “safe, secure, and trustworthy artificial intelligence.” The EO follows the administration’s long-standing strategy of “responsible innovation,” providing guidelines for the development of this emergent technology and the management of its inherent risks. The guidelines detailed in the EO are not designed to curtail the advancement of artificial intelligence (AI) but aim to promote the responsible use of the technology and to advance U.S. leadership in AI innovation and competition worldwide.
What CohnReznick thinks
The EO seeks to promote innovation and competition within the U.S. by calling on bilateral, multilateral, and multistakeholder organizations to collaborate, and this guidance comes at a time when AI technology continues to evolve and gain wider acceptance. The EO is a solid step in the right direction, focusing on key areas of potential AI risk such as unsafe or ineffective systems, data privacy, and algorithmic discrimination.
The more noteworthy provisions of the EO include:
- New standards for AI safety and security: According to the EO, developers of AI systems must share their safety test results and other critical information (such as the training model) with the U.S. government, and develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. Standards will be established to prevent the use of AI to engineer dangerous biological materials and to protect Americans from AI-enabled fraud and deception. The EO also states that the U.S. government will advance the use of AI tools to find and fix vulnerabilities in critical software.
- Protecting Americans’ privacy: The EO calls on the U.S. Congress to pass data privacy legislation to protect Americans’ privacy, including from the risks posed by AI. This is one of the biggest takeaways from the Executive Order. The U.S. does not have a nationwide consumer privacy law comparable to the European General Data Protection Regulation (GDPR). While some states such as California, Colorado, Connecticut, Utah, and Virginia have taken the initiative to create privacy laws for their residents, the absence of a national privacy law opens up the possibility of 50 different state privacy laws being passed, creating a potential compliance minefield for businesses.
- Advancing equity and civil rights: This section of the EO re-emphasizes the 2022 AI Bill of Rights as a guide for avoiding the introduction of bias in AI code development. According to the EO, property owners, federal benefits programs, and federal contractors will be required to keep AI algorithms from being used to exacerbate discrimination, and fairness will be promoted throughout the criminal justice system.
- Advocating for consumers, patients, and students: This provision calls for advancing the responsible use of AI, including for its potential to transform education.
- Supporting workers: The U.S. work environment faces the dangers of increased workplace surveillance, bias, and job displacement. According to the EO, principles and best practices will be developed to prevent employers from using AI to undercompensate workers, unfairly evaluate job applications, or deny workers the ability to organize.
The adoption of AI technology in our day-to-day lives, as well as in businesses seeking a competitive edge, is expected to grow rapidly. The benefits are enormous and, at the moment, unquantifiable, but so are the inherent risks. The call to action by the White House is a welcome first step in developing guardrails for the use of AI. Organizations are still better served by taking a risk-based approach backed by elements of corporate governance and ethics at the early stages of deploying AI technology.
Organizations should consider the following when moving forward with the use of AI tools:
- Pay close attention to cybersecurity risk assessments for new and emerging technologies, and institute industry best practices using a prioritized, risk-based approach.
- Work with strategic planning teams in substantiating AI use cases, business challenges, key value drivers, and competitive advantages.
- Build a knowledge base within the organization by partnering with third-party organizations and individuals in the AI space, as part of proactively tracking updates on major developments and initiatives.
- Recognize that the use of AI technology in biometrics such as facial recognition, fingerprinting, and voice recognition in our business and personal lives creates the risk of violating individual privacy, which may ultimately introduce reputational and financial risks for organizations. AI technology must be used in a responsible manner that protects user privacy and enhances public trust without violating regulatory or compliance requirements. A well-thought-out governance structure for an enterprise AI program will go a long way toward addressing these risks.