At the moment we are drafting a governance framework that sets out the controls required to safely develop and deploy AI applications. Ideally this would be accompanied by an operating model (i.e. more detailed guidance on tools, processes and approaches) for the "safe" development of AI applications, which would then (most likely) adhere to the governance framework.
NCSC and CISA have put together comprehensive guidelines for exactly this purpose. Perhaps we could lean on some of the standards set out there? Just a quick thought, but instead of developing an entirely new model, could we predicate our "safe/secure" AI practices on the Secure Design, Development, Deployment, and Operation/Maintenance framework they use in those guidelines?
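If we did go down that route, one way to keep the operating model tied to the framework would be to hold the controls in a machine-readable catalogue keyed by those four lifecycle areas. A minimal sketch in Python, purely illustrative: the `Control` type, the control IDs and the descriptions below are hypothetical placeholders I've made up, not items taken from the NCSC/CISA guidelines.

```python
from dataclasses import dataclass


@dataclass
class Control:
    """One control in the operating model (illustrative structure only)."""
    id: str
    description: str
    evidence: str  # what a team would produce to show compliance


# The four lifecycle areas come from the NCSC/CISA "Guidelines for Secure
# AI System Development". The controls listed under each are hypothetical
# examples, not quotes from the guidelines.
OPERATING_MODEL: dict[str, list[Control]] = {
    "Secure Design": [
        Control("SD-01", "Threat-model the AI system before development starts",
                "Reviewed threat model document"),
    ],
    "Secure Development": [
        Control("SDV-01", "Track provenance of training data and third-party models",
                "Signed data/model inventory"),
    ],
    "Secure Deployment": [
        Control("SDP-01", "Release models only through an approved, audited pipeline",
                "Pipeline audit log"),
    ],
    "Secure Operation and Maintenance": [
        Control("SOM-01", "Monitor deployed models for drift and misuse",
                "Monitoring dashboard and alert records"),
    ],
}

if __name__ == "__main__":
    # Print the catalogue grouped by lifecycle area.
    for phase, controls in OPERATING_MODEL.items():
        print(phase)
        for c in controls:
            print(f"  {c.id}: {c.description} (evidence: {c.evidence})")
```

Something in this shape could even be checked in CI, so every AI project has to map evidence to each lifecycle area, but that's a detail for the operating model itself.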