AI as a gamechanger: Institute guardrails around data, ethics, and compliance


Today, artificial intelligence (AI) technology is evolving rapidly and gaining wide acceptance. This is evident in the application of AI to everyday life and work, driven by the emergence of generative AI platforms and large language models (LLMs) such as ChatGPT and Google Bard, which allow an individual to have “human-like conversations” with chatbots capable of generating text, video, or images in near-real time.

Every innovative technology carries inherent risks. At times, technology change moves too fast for risks to be evaluated and guardrails put in place. Especially as these AI tools evolve quickly, there are likely risks we don’t yet properly understand – much less understand how to mitigate. At a bare minimum, it is critical to be proactive in understanding the risks we do know about and doing our best to address them.

Organizations should consider steps to address these top risks before allowing generative AI platforms to support their business.

Untrustworthy data 

Generative AI at its core works by ingesting and manipulating data. There are limitations to both the manipulation process and the integrity of the underlying data. Hence, users need to take steps to confirm that their AI results are sound in terms of both the process and the source data.

As an example, one major flaw with current chatbots is that their models are prone to “hallucination”: a confidently delivered response that makes no sense, often built on random or fabricated information. The tool may produce an answer based on training data that is false or misapplied in context, and it will keep generating further responses from that same data even when a reasonable person would recognize them as patently false, with no semblance of reality.

Users must also be aware that not all training data used in generative AI is free of bias. All data sets carry some inherent bias; a search engine, for example, may rank certain results higher if paid to do so, and conversely may be less likely to surface results from less-established sources. With many AI tools now pulling widely from across the internet, it is harder to know where information came from and whether the decision-making behind the results was free of bias; the developer itself might not even be aware of that bias. Tools pulling only from a subset of information won’t fully represent the bigger picture, and tools pulling from data sets with gender-based, socioeconomic, or racial discrepancies may reflect those same deficiencies.

Takeaway: Trust but verify

While users generally can trust AI tools to provide results as intended, it’s important to not trust them 100%. Users can’t necessarily verify the inputs that their tools are working from, but they can keep checks on the outputs by remembering that there is a margin for error and verifying information where possible. This may include:

  • Checking facts or statistics against official/firsthand sources such as government, regulatory, or professional organization websites (a minimal sketch of this kind of check follows this list).
  • Where possible, questioning the developer of the tool – or at minimum checking their website – for information on what data was used to train the model.
  • To tread cautiously, organizations can purchase AI technology and train private models on their own data so that they know decisions are being made based only on their own trusted information.
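
For teams that want to operationalize this kind of spot-checking, here is a minimal Python sketch, assuming an internally maintained allow-list of official source domains. The domains shown and the `flag_unverified_sources` helper are illustrative assumptions, not features of any particular AI product; the idea is simply to flag AI-supplied citations whose domains are not on the allow-list so a human can verify them.

```python
from urllib.parse import urlparse

# Illustrative allow-list of domains the organization treats as official or
# firsthand sources; a real list would be curated and maintained internally.
TRUSTED_DOMAINS = {"census.gov", "sec.gov", "nist.gov", "who.int"}

def is_trusted(url: str) -> bool:
    """Return True if the URL's host matches, or is a subdomain of, a trusted domain."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def flag_unverified_sources(citations: list[str]) -> list[str]:
    """Return the citations that still need manual fact-checking."""
    return [url for url in citations if not is_trusted(url)]

# Example: citations extracted from an AI-generated draft.
draft_citations = [
    "https://www.census.gov/data/tables.html",
    "https://example-blog.com/post/123",
]
for url in flag_unverified_sources(draft_citations):
    print(f"Needs manual verification: {url}")
```

A check like this does not replace human review; it simply routes lower-confidence material to a reviewer.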

Trust and privacy/ethical challenges

Again, AI tools are built on data – and thus hold the same risks as any other platform that holds data. Introducing sensitive or personally identifiable information (PII) into these programs may expose it to leaks, breaches, ransomware attacks, and similar threats from unauthorized sources. For example, OpenAI, the developer of ChatGPT, recently announced a data leak affecting PII of some of its subscribers. Regulatory bodies responsible for privacy laws such as Europe’s General Data Protection Regulation (GDPR) have yet to adopt specific language addressing this particular risk of AI data leak or exposure. We will likely witness many more leaks and breaches as organizations go down the path of increased AI use.

Another privacy/ethical consideration when using AI concerns copyright law. It is increasingly common for users today to pass off material as “theirs” when in fact some element of AI use was involved. The U.S. Copyright Office recently reminded AI users that AI-generated output does not qualify for intellectual property protection: Human-created content can be copyright-protected, but the AI-generated portion cannot.

Takeaway: Discussion, caution, and disclosures

AI tools generally hold the same risks as any other data-based tool, and accordingly the same controls should be applied. Users should be cautious about what data is being fed into an AI tool; for especially sensitive data, traditional tools that offer more control around security are the best option. When AI tools are used, clients should be provided with disclosures and disclaimers around how their data is being used and what risks it may be exposed to.

Similarly, when using AI to produce written content, organizations should be clear about what is AI-generated and what is human-written.
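
As one concrete illustration of the earlier point about being cautious with what data is fed into an AI tool, the following is a minimal Python sketch assuming simple pattern-based redaction: it strips obvious identifiers (email addresses and U.S. Social Security–style numbers) from text before it is submitted to an external AI tool. The patterns are deliberately simplistic assumptions; production deployments would rely on the organization’s own data-classification or data-loss-prevention tooling.

```python
import re

# Deliberately simple, illustrative patterns; real deployments would use
# vetted data-loss-prevention or classification tooling instead.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before the text leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> Summarize the complaint from [REDACTED EMAIL], SSN [REDACTED SSN].
```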

Absence of a structured governance model

AI use and development should be transparent for users. Users of an AI platform should have the confidence that ethics and a governance structure were factored into its development and deployment. This includes the adoption of policies/standards, controls, and a framework that enforce good cybersecurity hygiene.

AI adoption must respect individuals’ privacy rights, uphold data protection, and be devoid of discrimination.

Finally, AI use must be made safe for users. Like any other modern technology, AI relies on a dependable network. Insecure networks are back doors for threat actors seeking to infiltrate the environment.

Takeaway: Know your tool

As AI options proliferate, organizations must conduct due diligence before adopting one. Do the developers have posted statements about their ethics and governance? Is the tool legally compliant, and was it built on lawfully obtained data? Are best practices enforced?

Again, for some organizations, the best route may be developing their own AI tools by purchasing access to large language models and training them on their own information and data. In doing so, they should thoroughly document their processes and decision-making so that their users have confidence in their tools.
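
One way to support that documentation, sketched below under the assumption of a self-hosted or privately trained model, is to record every prompt and response together with the model version and training-data snapshot that produced it. The `ModelAuditRecord` structure, its field names, and the JSON-lines log file are illustrative choices, not a prescribed standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelAuditRecord:
    """Minimal audit-trail entry for one interaction with a privately hosted model."""
    timestamp: str
    model_version: str          # e.g., an internal release tag
    training_snapshot: str      # identifier of the data the model was trained on
    prompt: str
    response: str
    reviewed_by_human: bool

def log_interaction(record: ModelAuditRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line so interactions can be reviewed later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_interaction(ModelAuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="internal-llm-2024.1",       # assumed naming convention
    training_snapshot="finance-docs-2024-03",  # assumed dataset identifier
    prompt="Draft a summary of Q1 vendor risk findings.",
    response="(model output here)",
    reviewed_by_human=False,
))
```

Records like these give users and auditors a concrete trail showing which model, trained on which data, produced which output.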

Insufficient regulation and industry standards

The absence of specific, enforceable laws or regulations addressing the development and use of AI can expose users to personal, legal, or even financial risks. Without standards to follow, there are no protections in place to shield users in the event of breaches, problematic results, or other negative developments.

However, a number of frameworks have been developed in the interim to broadly guide AI use and development; these can provide a useful starting point for drafting AI policies and procedures within an organization.

Blueprint for an AI Bill of Rights

In its Blueprint for an AI Bill of Rights, the White House Office of Science and Technology Policy outlined five principles that should guide the design, use, and deployment of AI. While the framework is aimed primarily at developers of AI tools, all users should do their due diligence to confirm that tools meet these guidelines.

  • Safe and effective systems – The need to implement a risk management framework.
  • Algorithmic discrimination protections – Having a governance structure or ethical guidelines.
  • Data privacy – Multiple regulations, especially at the state and industry level, address this.
  • Notice and explanation – Guidelines for various disclosures that should be made to users about the use of automated systems.
  • Human alternatives, consideration, and fallback – The social responsibility aspect of the framework.

NIST AI Risk Management Framework

According to the National Institute of Standards and Technology (NIST) AI Risk Management Framework, characteristics of trustworthy AI systems are:

  • Valid and reliable
  • Safe, secure, and resilient
  • Fair, with bias managed
  • Transparent and accountable
  • Explainable and interpretable
  • Privacy-enhanced

Any organization developing its own AI tool – or customizing use of an existing one – should work toward meeting these characteristics.
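
One lightweight way to track progress against these characteristics, shown in the hedged Python sketch below, is a self-assessment checklist that maps each characteristic to the internal control or evidence the organization believes addresses it. The example controls are assumptions for illustration, not NIST requirements.

```python
# Illustrative self-assessment: NIST AI RMF trustworthiness characteristics
# mapped to the internal control or evidence an organization might cite.
# The controls named below are assumed examples, not NIST requirements.
assessment = {
    "Valid and reliable": "Model accuracy tested against a held-out benchmark each release",
    "Safe, secure, and resilient": "Hosted on a segmented network covered by the incident response plan",
    "Fair, with bias managed": None,  # gap: no bias review completed yet
    "Transparent and accountable": "Model documentation published to the internal governance portal",
    "Explainable and interpretable": None,  # gap: explainability tooling not yet selected
    "Privacy-enhanced": "Training data scrubbed of PII per the data-classification policy",
}

gaps = [name for name, evidence in assessment.items() if evidence is None]
print(f"Characteristics without supporting evidence: {gaps}")
```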

Next steps for AI users

As society increases its use of this revolutionary technology, users must ask themselves:

  • How can I address the identified risks in my AI deployment and use?
  • How can I be sure the AI vendor’s data can be trusted, and confirm that I’m being provided with trustworthy results?

Senior leadership in organizations should consider the following when moving forward with use of AI tools:

  • Pay closer attention to cybersecurity risk assessments for new and emerging technologies, and institute industry best practices using a prioritized risk-based approach.
  • Develop a governance structure for enterprise AI implementation.
  • Work with their strategic planning teams in substantiating AI use cases, business challenges, key value drivers, and competitive advantages.
  • Build a knowledge base within the organization by partnering with third-party organizations and individuals in the AI space, as part of proactively tracking updates on major developments and initiatives.

Like the early days of automation, these early days of widespread, openly available AI resemble the Wild West: Users are to some extent on their own when it comes to implementing these tools safely and successfully. There are currently no perfect answers to risk mitigation. In the meantime, organizations must remain a step ahead of the risks and on top of the latest updates. Doing so will ultimately allow them to enjoy the benefits of AI while minimizing the dangers as best they can.

Contact our team for more information.

Contact

Bhavesh N. Vadhani, CISA, CRISC, CGEIT, PMP, CDPSE, Principal, Global Leader, Cybersecurity, Technology Risk, and Privacy

703.847.4418

Adonye Chamberlain, Manager, Cybersecurity, Technology Risk, and Privacy

703.744.7409
