Update Security Policy with extra security recommendations #3041

Merged · 1 commit · Mar 26, 2024
8 changes: 8 additions & 0 deletions SECURITY.md
@@ -36,6 +36,14 @@ TorchServe as much as possible relies on automated tools to do security scanning
2. Using private-key/certificate files

You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#enable-ssl).
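
   For illustration, a minimal `config.properties` sketch for the private-key/certificate option, following the linked guide; the addresses, ports, and file names below are placeholders, not defaults.

   ```properties
   # config.properties -- serve all endpoints over HTTPS using a
   # private key and certificate (see the configuration guide above)
   inference_address=https://127.0.0.1:8443
   management_address=https://127.0.0.1:8444
   metrics_address=https://127.0.0.1:8445
   private_key_file=mykey.key
   certificate_file=mycert.pem
   ```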
6. Prepare your model against bad inputs and prompt injections. Some recommendations:
1. Pre-analysis: check how the model performs by default when exposed to prompt injection (e.g. using [fuzzing for prompt injection](https://github.com/FonduAI/awesome-prompt-injection?tab=readme-ov-file#tools)).
2. Input Sanitization: Before feeding data to the model, sanitize inputs rigorously (a Python sketch follows this list). This involves techniques such as:
- Validation: Enforce strict rules on allowed characters and data types.
- Filtering: Remove potentially malicious scripts or code fragments.
- Encoding: Convert special characters into safe representations.
- Verification: Run tooling that identifies potential script injections (e.g. [models that detect prompt injection attempts](https://python.langchain.com/docs/guides/safety/hugging_face_prompt_injection)).
7. If you intend to run multiple models in parallel with shared memory, it is your responsibility to ensure the models do not interact or access each other's data. The primary areas of concern are tenant isolation, resource allocation, model sharing and hardware attacks.
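
As a minimal sketch of the sanitization steps in recommendation 6, assuming a text-in/text-out model; the length limit, character allow-list, and injection patterns below are illustrative assumptions, not part of TorchServe, and should be tuned to your model's input domain.

```python
import html
import re

# Illustrative limits and allow-list -- assumptions, adjust per model.
MAX_INPUT_LENGTH = 2048
ALLOWED_CHARS = re.compile(r"^[\w\s.,:;!?'\"()\-]+$", re.UNICODE)
# Deliberately naive filter for common injection phrasings; pair it with
# a dedicated detection model in practice (see the Verification step).
SUSPICIOUS_PATTERNS = re.compile(
    r"(ignore (all |previous )*instructions|<script\b)", re.IGNORECASE
)

def sanitize_prompt(raw: str) -> str:
    """Validate, filter, and encode a raw prompt before it reaches the model."""
    # Validation: enforce type, length, and allowed characters.
    if not isinstance(raw, str):
        raise TypeError("prompt must be a string")
    text = raw.strip()
    if not text or len(text) > MAX_INPUT_LENGTH:
        raise ValueError("prompt is empty or exceeds the length limit")
    if not ALLOWED_CHARS.match(text):
        raise ValueError("prompt contains disallowed characters")
    # Filtering: reject inputs matching known injection patterns.
    if SUSPICIOUS_PATTERNS.search(text):
        raise ValueError("prompt matches a known injection pattern")
    # Encoding: convert special characters into safe representations.
    return html.escape(text)
```

In a TorchServe custom handler, a check like this would typically run in `preprocess()` before tokenization; the regexes above are intentionally simple and are no substitute for the fuzzing and classifier-based verification linked above.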


