
Mapping threats and Controls to Existing risk frameworks #32

Open
alwell-kevin opened this issue Jul 9, 2024 · 7 comments
Assignees: alwell-kevin
Labels: 📚 governance-framework, ❓ question (Further information is requested)

Comments

@alwell-kevin commented Jul 9, 2024

In an effort to ensure our risk identification and mitigation framework adheres to industry standards, we intend to map our known risks and mitigations to common industry-standard security frameworks such as NIST or the CRI Profile.

This should manifest as a column on the risk/mitigations table to associate the row with a domain of inquiry from a common framework.
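
As a rough sketch of what that could look like (the column names and example row below are illustrative, not settled):

```markdown
| Threat / Risk                               | Mitigation / Control                   | Framework Mapping                         |
| ------------------------------------------- | -------------------------------------- | ----------------------------------------- |
| Model fabricates plausible-but-false output | Grounding via retrieval; human review  | NIST AI RMF GenAI Profile: Confabulation  |
```

Each row would carry one or more references into the chosen framework(s), so a reader can trace a threat or mitigation back to the corresponding domain of inquiry.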

@alwell-kevin self-assigned this Jul 9, 2024
@lucaborella89 (Contributor) commented:
Chamindra, Andy and Kevin to support here

@lucaborella89 (Contributor) commented:
Gibson + Luca to work on the mapping with NIST and move to .md

@lucaborella89 (Contributor) commented:
Check the category from Oleg's email (link).

@gkocak-scottlogic commented:
I'm also working on mappings (not limited to NIST) to support this, in addition to the threat/control definitions. I'm looking forward to the markdown version; I can contribute to it as well.

@finos-admin transferred this issue from another repository Aug 29, 2024
@gibsonlam (Contributor) commented Sep 10, 2024

The comment above regarding "mapping with NIST" was specifically about the existing categories used (Confidentiality, Integrity, and Availability) and whether those can or should be expanded to help with organizing and consuming the information. It was suggested to check whether NIST has existing categories that can be leveraged.

The NIST risk categories that are most applicable would be from the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.

The categories are:

  1. CBRN Information or Capabilities: Eased access to or synthesis of materially nefarious information or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) weapons or other dangerous materials or agents.
  2. Confabulation: The production of confidently stated but erroneous or false content (known colloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived.
  3. Dangerous, Violent, or Hateful Content: Eased production of and access to violent, inciting, radicalizing, or threatening content as well as recommendations to carry out self-harm or conduct illegal activities. Includes difficulty controlling public exposure to hateful and disparaging or stereotyping content.
  4. Data Privacy: Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of biometric, health, location, or other personally identifiable information or sensitive data.
  5. Environmental Impacts: Impacts due to high compute resource utilization in training or operating GAI models, and related outcomes that may adversely impact ecosystems.
  6. Harmful Bias or Homogenization: Amplification and exacerbation of historical, societal, and systemic biases; performance disparities between sub-groups or languages, possibly due to non-representative training data, that result in discrimination, amplification of biases, or incorrect presumptions about performance; undesired homogeneity that skews system or model outputs, which may be erroneous, lead to ill-founded decision-making, or amplify harmful biases.
  7. Human-AI Configuration: Arrangements of or interactions between a human and an AI system which can result in the human inappropriately anthropomorphizing GAI systems or experiencing algorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI systems.
  8. Information Integrity: Lowered barrier to entry to generate and support the exchange and consumption of content which may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large-scale dis- and mis-information campaigns.
  9. Information Security: Lowered barriers for offensive cyber capabilities, including via automated discovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber operations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may compromise a system’s availability or the confidentiality or integrity of training data, code, or model weights.
  10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or licensed content without authorization (possibly in situations which do not fall under fair use); eased exposure of trade secrets; or plagiarism or illegal replication.
  11. Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, degrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse material (CSAM), and nonconsensual intimate images (NCII) of adults.
  12. Value Chain and Component Integration: Non-transparent or untraceable integration of upstream third-party components, including data that has been improperly obtained or not processed and cleaned due to increased automation from GAI; improper supplier vetting across the AI lifecycle; or other issues that diminish transparency or accountability for downstream users.

There are also categories from the NIST Cybersecurity Framework, but they do not cover the scope required for a Gen AI application.
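
Since the plan above is to move the mapping to .md, one way an individual entry could carry these categories is sketched below (the threat name and identifier are made up purely for illustration):

```markdown
## TR-7: Hallucinated responses

**Description:** The system confidently produces erroneous or fabricated content.

**Mitigations:** Grounding via retrieval; output validation; human review.

**Framework mapping (NIST AI RMF GenAI Profile):**
- 2. Confabulation
- 8. Information Integrity
```

Tagging each entry with the profile's numbered categories would also make it straightforward to generate the extra table column discussed at the top of this issue.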

@torinvdb (Contributor) commented Nov 26, 2024

I've put together a list of relevant frameworks and guidelines that might help guide our discussions. There's a lot out there, and most of it is subject to change. I think that if we are going to effectively perform a review/sanity check of what we care about (in scope for the governance framework), we'll need to break this down functionally. For example:

  • What frameworks or guidelines should I look into for CSP-specific AI services?
  • What frameworks or guidelines should I look into for emerging AI threats?
  • What toolkits exist that can help me mitigate the threats with the controls suggested by the framework?

Questions like these should be discussed in the SIG so that the best approach can be agreed upon. I believe there is value in providing both high-level guidance (during the intro/preamble?) and in linking these resources to relevant threats/controls throughout the AI Governance Framework. Perhaps we could even categorize these frameworks based on something like what @gibsonlam elicited from the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.
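
Purely as an illustration of that categorization (the placement of any given framework is a judgment call and would need SIG review), the index could be organized along the functional questions above:

```markdown
### CSP-specific AI services
- <frameworks/guidelines for cloud-provider AI offerings>

### Emerging AI threats
- NIST AI RMF: Generative Artificial Intelligence Profile

### Mitigation toolkits
- <toolkits implementing suggested controls>
```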

Relevant Frameworks and Guidelines
