Explainable AI: Everything You Need to Reduce Generative AI Risks

Published by The Ulap Team on February 27, 2024

Generative AI is growing rapidly.

Thanks to Large Language Models (LLMs) like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, users are integrating Generative AI into their everyday lives.

Getting answers to simple questions, translating words or sentences, writing research papers, developing custom computer code, and even generating images or artwork are all possible thanks to Generative AI and LLMs.

It’s in our web browsers, emails, SaaS products, social media platforms, and file systems.

Even the Department of Defense (DOD) is navigating the rapid advancements of Generative AI.

With Generative AI accessible everywhere and users inputting business data into various GPTs, an important question arises for commercial enterprise organizations and the DOD:

Is there a way to make Generative AI more secure and trustworthy?

The Risks of Generative AI

Generative AI and LLMs have significant limitations that open commercial enterprises and DOD organizations to risk.

Models can return copyright-protected data or outdated data, or even hallucinate, giving inaccurate answers to mission-critical questions.

Users do not have visibility into why an AI model provided a specific response, nor do they have any way to trace the output of an AI model. This means it’s nearly impossible to ensure the data you receive is accurate, let alone validate the response provided by the model.

Source: https://www.darpa.mil/program/explainable-artificial-intelligence

Our research within the DOD has uncovered the following areas of risk with the deployment of Generative AI models:

  • Data Source: Data used in model training and updates could include copyrighted materials, information from countries or organizations with alternative views, classified information, PHI (Protected Health Information), CUI (Controlled Unclassified Information), or other sensitive data sources that are not intended for inclusion in Generative AI models.

  • Model Governance: Generative AI models provided by commercial organizations, including OpenAI, do not provide insights into how models are developed, trained, monitored, or tuned. Without an understanding of these processes, the goals and outputs of the model can be misinterpreted by end users.

  • Model Transparency: Generative AI models evaluate numerous data points, and often multiple candidate responses, before providing an output to the end user. Commercial offerings do not expose model uncertainty, an explanation or context for the response, or the alternative responses the model could have provided. (A minimal sketch of surfacing token-level uncertainty follows this list.)

  • Model Biases: Generative AI systems might exhibit biases influenced by social and demographic variations from their training datasets and algorithmic structures. If not properly addressed, these models have the potential to absorb and magnify pre-existing societal prejudices related to race, gender, age, and ethnicity, among other factors, embedded in source data.
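
One practical way to address the transparency gap above is to surface the model’s own uncertainty to the end user when the serving API exposes token-level log-probabilities (several commercial LLM APIs can return these). The sketch below is a minimal, hypothetical example; the token values and the confidence threshold are illustrative and not taken from any specific product.

```python
import math

# Hypothetical per-token log-probabilities, as returned by an LLM API
# that exposes "logprobs"; the tokens and values below are illustrative.
token_logprobs = [
    ("The", -0.02), ("contract", -0.15), ("was", -0.05),
    ("signed", -1.90), ("in", -0.04), ("2019", -2.40),
]

def flag_uncertain_tokens(logprobs, threshold=0.7):
    """Return tokens whose probability falls below `threshold`.

    Showing these low-confidence spans to the end user is one simple way
    to expose model uncertainty rather than presenting every answer as
    equally certain.
    """
    flagged = []
    for token, logprob in logprobs:
        prob = math.exp(logprob)
        if prob < threshold:
            flagged.append((token, round(prob, 2)))
    return flagged

print(flag_uncertain_tokens(token_logprobs))
# [('signed', 0.15), ('2019', 0.09)] -> spans worth highlighting or verifying
```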

Many organizations already have Generative AI technologies within their processes but lack guidelines or policies on how those capabilities should be used.

Take Microsoft Office and GitHub as examples.

Both have Generative AI capabilities embedded into their products. Users can access tools and wizards that accelerate their daily tasks but have no visibility into how the model uses the data they provide.

This means a few things:

  • They may be feeding confidential or private information into the model
  • The output might include copyrighted or inaccurate data
  • Their data may not be secure and private

For some organizations, this may not be an issue. However, this poses a significant safety concern for many commercial organizations and the DOD.

So, how do you mitigate the risks associated with Generative AI?

Simple: implement an Explainable AI model.

What is Explainable AI?

Many organizations are investing in Explainable AI (XAI) to make their Generative AI models more secure.

Simply put, Explainable AI gives human users transparency and visibility into all aspects of the AI model. This allows them to understand and trust interactions with AI models, especially the model outputs.

Explainable AI addresses seven key areas for understanding AI:

  • Transparency: Do stakeholders understand the model’s decisions in the formats and language presented?
  • Causality: Do the predicted changes in the output due to input perturbations also occur in the actual system?
  • Privacy: Can the protection of sensitive user information be guaranteed?
  • Fairness: Can it be verified that model decisions are fair across protected groups? (A minimal fairness check is sketched after this list.)
  • Trust: How confident are the human users in working with the system?
  • Usability: How capable is the system of providing a safe and effective environment for users to perform their tasks?
  • Reliability: How robust is the system performance against changes in parameters and inputs?
Source: https://www.researchgate.net/publication/365954123_Explainable_AI_A_review_of_applications_to_neuroimaging_data
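
To make the Fairness item above concrete, one common starting point is a demographic parity check: compare the rate of favorable model decisions across protected groups and flag large gaps for review. The sketch below assumes a hypothetical audit log of (group, decision) records; the group labels and what counts as an acceptable gap are illustrative and would depend on an organization’s own fairness policy.

```python
from collections import defaultdict

# Hypothetical audit log of model decisions: (protected_group, decision),
# where 1 is a favorable outcome and 0 is not.
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate_by_group(records):
    """Return the share of favorable decisions for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}

rates = positive_rate_by_group(decisions)
parity_gap = max(rates.values()) - min(rates.values())

print(rates)       # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap)  # 0.5 -> a gap this large warrants a closer fairness review
```

Demographic parity is only one of several fairness definitions; which definition applies depends on the use case and the policies that govern it.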

Additional research, including Investigating Explainability of Generative AI for Code through Scenario-based Design by Jiao Sun et al., provides goals, frameworks, and interviews with end users that cover all aspects of developing, deploying, and operating trustworthy AI capabilities.

Reducing Risk with Explainable AI

Users within the DOD and commercial enterprises who need to trust the outputs of Generative AI models require a detailed understanding of the model.

This includes:

  • Where the data is sourced
  • Visibility into the model algorithms
  • How models are developed, trained, monitored, and tuned (a minimal sketch of recording this metadata follows the list)
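
One lightweight way to capture this information is a structured model record that travels with every deployment. The sketch below is a hypothetical schema, not an established standard; the field names and example values are illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """Hypothetical metadata record kept alongside a deployed model."""
    model_name: str
    version: str
    data_sources: List[str]           # where the training data came from
    training_procedure: str           # how the model was developed and trained
    monitoring_dashboards: List[str]  # where performance and drift are tracked
    tuning_history: List[str] = field(default_factory=list)

record = ModelRecord(
    model_name="mission-assistant",
    version="1.3.0",
    data_sources=["approved-doctrine-corpus", "public-web-snapshot-2023"],
    training_procedure="supervised fine-tuning on curated Q&A pairs",
    monitoring_dashboards=["drift-monitor", "hallucination-rate"],
    tuning_history=["instruction-tuning pass, Jan 2024"],
)
print(record.data_sources)  # answers "where is the data sourced?" at a glance
```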

The image below details how Explainable AI provides high-level capabilities to explain the model output to the end-user.

Source: https://www.darpa.mil/program/explainable-artificial-intelligence

The Department of Defense and Explainable AI

The DOD, in particular, requires applications that support mission-critical capabilities across all aspects of its daily operations and, more importantly, mission planning and execution.

Any errors or interruptions can drastically impact operations, giving our adversaries a tactical advantage. 

DOD end users require visibility and transparency for all interactions with Generative AI models.

Financial Services and Explainable AI

You can also look at the Financial Services industry. Analysts and individual traders are always looking for assets that will perform better in their portfolios.

An accomplished analyst isn’t going to blindly accept an AI model’s recommendation for stocks to add to their client portfolios.

They want the background on the suggested stock, how it was selected, what other stocks were evaluated for selection, and the confidence the model has in that stock.
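
A hypothetical sketch of what such an explained recommendation could look like as a data structure is below; the field names, ticker, and confidence value are illustrative, not output from any real model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExplainedRecommendation:
    """Hypothetical shape of an explainable stock recommendation."""
    ticker: str
    rationale: str                  # background on why the stock was suggested
    selection_criteria: List[str]   # how it was selected
    alternatives_considered: List[str]
    confidence: float               # the model's confidence in this pick

rec = ExplainedRecommendation(
    ticker="EXAMPLE",
    rationale="Consistent revenue growth and low correlation with the portfolio.",
    selection_criteria=["5-year revenue trend", "volatility screen"],
    alternatives_considered=["ALT1", "ALT2"],
    confidence=0.72,
)
print(f"{rec.ticker}: {rec.confidence:.0%} model confidence")
```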

Explainable AI’s Foundational Capabilities

Explainable AI provides several foundational capabilities to help organizations build, deploy, and operate AI models. 

These include:

  • Accuracy: Data that correctly reflects proven, true values or the specified action, person, or entity. This includes data structure, content, and variability.
  • Completeness: The data present at a specified time must contain the expected information or statistics measured at the data set, row, or column level.
  • Conformity: Data sets follow agreed-upon internal policies, standards, procedures, and architectural requirements.
  • Consistency: The degree to which a value is uniformly represented within and across data sets.
  • Uniqueness: Ensures a one-to-one alignment between each observed event and the record that describes such an event.
  • Integrity: A data set’s pedigree, provenance, and lineage are known and aligned with relevant business rules.
  • Timeliness: Measures the time between an event occurring and the data’s availability for use. (A short sketch that checks several of these data-quality dimensions follows this list.)
  • Model Governance: To validate that the model complies with local legal requirements and DOD data-handling requirements for classified and unclassified data, the organization must define a framework and implement an automated process to track the lineage of data ingested and processed by the model. Additionally, the framework must monitor and track human and system interactions with the model.
  • Model Transparency: Model transparency is the most critical component for implementing trustworthy AI. It aims to break down the "black box" of AI, where users don’t understand how the algorithm or model produced the output, what options the model evaluated, or the confidence of the response provided by the model. Key categories to support model transparency were identified and documented in the XAI Question Bank published in Questioning the AI: Informing Design Practices for Explainable AI User Experiences by Liao et al.
Source: arXiv:2001.02478 [cs.HC]
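
To make several of the data-quality dimensions above concrete, the sketch below runs completeness, uniqueness, and timeliness checks against a small, hypothetical pandas DataFrame; the column names, sample values, and what counts as an acceptable result are illustrative.

```python
import pandas as pd

# Hypothetical ingest batch; columns and values are illustrative.
df = pd.DataFrame({
    "record_id": [1, 2, 2, 4],
    "value": [10.5, None, 3.2, 7.7],
    "event_time": pd.to_datetime(["2024-02-01", "2024-02-01", "2024-02-02", "2024-02-03"]),
    "ingested_time": pd.to_datetime(["2024-02-02", "2024-02-03", "2024-02-02", "2024-02-10"]),
})

# Completeness: share of non-null values per column.
completeness = df.notna().mean()

# Uniqueness: each observed event should map to exactly one record.
duplicate_ids = int(df["record_id"].duplicated().sum())

# Timeliness: lag between the event occurring and the data being available.
lag_days = (df["ingested_time"] - df["event_time"]).dt.days

print(completeness.to_dict())        # the 'value' column is only 75% complete
print(duplicate_ids, "duplicate record_id values")
print(lag_days.max(), "days of maximum ingest lag")
```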

Implementing Explainable AI

Generative AI is a powerful tool for commercial enterprises and Department of Defense organizations, but it does come with serious risks.

Implementing an Explainable AI model provides transparency into every aspect of the Generative AI model, allowing users to understand how trustworthy AI capabilities are developed, deployed, and operated.

Here at Ulap, we are updating our Machine Learning Workspace to include critical Explainable AI capabilities. 

The goal, as always, is to deliver an AI/ML platform that provides trustworthy Generative AI to the DOD and enterprise users.