
Create an operating model for AI development that is compatible with the governance framework #17

Open
ColinEberhardt opened this issue Jul 12, 2024 · 1 comment

Comments

@ColinEberhardt (Collaborator)

ColinEberhardt commented Jul 12, 2024

At the moment we are drafting a governance framework that sets out the controls required to safely develop and deploy AI applications. Ideally this would be accompanied by an operating model (i.e. more detailed guidance on tools, processes, and approaches) for the "safe" development of AI applications, which would in turn (most likely) adhere to the governance framework.

@torinvdb (Contributor)


NCSC and CISA have put together comprehensive guidelines for this purpose. Perhaps we could lean on some of the standards set out there? Just a quick thought, but instead of developing an entirely new model, we could predicate our "safe/secure" AI practices on the Secure Design, Development, Deployment, and Operation/Maintenance framework used in those guidelines.

https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development
