Replies: 4 comments 1 reply
-
Here's Rob from the AI guide. ISO defines AI as Therefore it rules out optimization systems that perform tasks like scheduling, planning . (Note that optimization as a mechanism is at the root of machine learning because the training process optimizes the model parameters to achieve the best performance). The result is that AI under ISO consists of machine learning and so-called heuristic systems. ISO 5338 also discusses this. Heurstic systems that reason based on rules or patterns programmed by people instead of learning from data. They used to be called expert-systems, and currently they represent a relatively small part of the AI systems in production. When it comes to model attacks, heuristic systems of course don't suffer from data poisoning per se, but they do suffer from evasion attacks and model theft. So to answer your question: The MLSec Top 10 is by definition about machine learning which includes deep learning and generative AI models such as LLMs, and excludes reinforcement learning and heuristic systems. It may be good to cover some of these definitions in the guiding text with the Top 10, and explain that heuristic systems DO suffer from a few of the top 10 entries. To wrap up, let's look at the AI act. It defines AI as 'software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.' Annex I says: (b)Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c)Statistical approaches, Bayesian estimation, search and optimization methods. This is consistent with ISO, with the exception of the very last bit: 'Search and optimization methods'. Search and optimization is typically an Operations Research (OR) problem, and certainly there is overlap with the field of AI. From a security threat perspective I see optimization systems as a very different use case from machine learning and heuristic systems. It is better to leave them out of top 10 discussion to prevent create unnecessary noise, and cover them separately if there is a demand. |
Beta Was this translation helpful? Give feedback.
-
Having just worked with a team working on an LLM - the difference we felt the most is that there's usually an un-trusted user interacting with an LLM - since the LLM is meant to process language and typically interacts directly with a user in a human language. In contrast, ML efforts we have worked with are often further back in the software stack. The machine learning code may be quietly learning from data stores and making recommendations or taking automated actions - without as much direct human interaction. A lot of the risks are the same, but there's enough difference that I currently expect my team to eventually maintain separate guides for LLM and ML vulnerability discussions. |
Beta Was this translation helpful? Give feedback.
-
Hi @robvanderveer that is a wonderful summary. can you please clarify this.
To add, would it be apt to say
I also agree to your AI definition from AI act as, software that is developed with ML models to complete a task with human precision or better. |
Beta Was this translation helpful? Give feedback.
-
created wiki page for glossary: https://github.com/OWASP/www-project-machine-learning-security-top-10/wiki/Glossary entries from wiki's glossary page will be added to the 'glossary' tab: https://owasp.org/www-project-machine-learning-security-top-10/
ref: #16 |
Beta Was this translation helpful? Give feedback.
-
Is there a common set of guidelines and agreed taxonomy to define the difference between topics such as:
Some of these topics may be limited to the research/scientific realm and not for security practitioners.
❓ ## Questions for discussion
Beta Was this translation helpful? Give feedback.
All reactions