Mapping Threats and Controls to Existing Risk Frameworks #32
- Started the mapping with NIST here: https://docs.google.com/spreadsheets/d/1ZOWupn6LPPYxVz-wfuAJ48BTVvPjWgOUitTbzhXCmMc/edit?gid=910937723#gid=910937723
- Chamindra, Andy, and Kevin to support here
- Gibson + Luca to work on the mapping with NIST and move to .md
- Check the category from Oleg's email link
I'm working on mappings (not limited to NIST) as well, to support this in addition to the threat/control definitions. Looking forward to the markdown version; I can contribute to it as well.
The comment above regarding "mapping with NIST" was specifically about the existing categories used (Confidentiality, Integrity, and Availability), and whether those can/should be expanded, to help with organizing and consuming the information. It was suggested to check if NIST has existing categories that can be leveraged. The NIST risk categories that are most applicable would be from the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. The categories are:
There are also categories from the NIST Cybersecurity Framework, but they do not cover the scope required for a Gen AI application.
I've put together a list of relevant frameworks and guidelines that might help guide our discussions. There's a lot out there, and most of it is subject to change. I think that if we are going to effectively perform a review/sanity check of what we care about (in scope for the governance framework), we'll need to break this down functionally. For example:
Questions like these should be discussed in the SIG so that a best approach can be agreed upon. I believe there is value in providing both high-level guidance (during the intro/preamble?) and linking these resources to relevant threats/controls throughout the AI Governance Framework. Perhaps we could even categorize these frameworks based on something like what @gibsonlam elicited from the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.

Relevant Frameworks and Guidelines
In an effort to ensure our risk identification and mitigation framework adheres to industry standards, we intend to map our known risks and mitigations to common industry-standard security frameworks such as NIST or the CRI Profile.
This should manifest as a column on the risk/mitigations table, associating each row with a domain of inquiry from a common framework.
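As a rough sketch of what that extra column could look like in practice, the snippet below models each risk row as a record with a `framework_mappings` field and renders it as a markdown table row. The risk IDs, category names, and framework domain labels here are purely illustrative placeholders, not entries from the actual spreadsheet:

```python
# Hypothetical sketch: attaching framework mappings to risk/mitigation rows.
# All IDs, risk names, and framework labels below are illustrative only.
risks = [
    {
        "id": "AIR-1",
        "risk": "Information leaked to a hosted model",
        "framework_mappings": {
            "NIST AI RMF (GenAI Profile)": ["Data Privacy"],
            "CRI Profile": ["GV.RM"],  # placeholder domain label
        },
    },
]

def to_markdown_row(entry):
    """Render one risk entry as a markdown table row with a mappings column."""
    mappings = "; ".join(
        f"{fw}: {', '.join(cats)}"
        for fw, cats in entry["framework_mappings"].items()
    )
    return f"| {entry['id']} | {entry['risk']} | {mappings} |"

print(to_markdown_row(risks[0]))
```

Keeping the mappings as a per-framework dictionary (rather than a single flat string) would let the same source data be rendered into separate columns per framework later, if the SIG decides that reads better.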