From 05a72b9bcf11e638ec6e38daa9dbefda2f9c09a2 Mon Sep 17 00:00:00 2001
From: udaij12
Date: Tue, 12 Mar 2024 16:27:04 -0700
Subject: [PATCH 1/6] documentation updates

---
 docs/M1_support.md | 62 ++++++++++++++++++++++++++++++++++++++++++++++
 docs/Security.md   |  2 ++
 2 files changed, 64 insertions(+)
 create mode 100644 docs/M1_support.md

diff --git a/docs/M1_support.md b/docs/M1_support.md
new file mode 100644
index 0000000000..7cfc5a3ab6
--- /dev/null
+++ b/docs/M1_support.md
@@ -0,0 +1,62 @@
+# M1 support
+
+## M1 Support
+TorchServe supports macOS with M1 hardware.
+
+1. TorchServe CI jobs now include M1 hardware to ensure continued support; see the GitHub [documentation](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories) on hosted M1 runners.
+   - [Regression Tests](https://github.com/pytorch/serve/blob/master/.github/workflows/regression_tests_cpu.yml)
+   - [Regression Binaries Test](https://github.com/pytorch/serve/blob/master/.github/workflows/regression_tests_cpu_binaries.yml)
+2. For [Docker](https://docs.docker.com/desktop/install/mac-install/), ensure Docker for Apple silicon is installed, then follow the [setup steps](https://github.com/pytorch/serve/tree/master/docker).
+## Running TorchServe on M1
+
+Follow the [getting started documentation](https://github.com/pytorch/serve?tab=readme-ov-file#-quick-start-with-torchserve-conda).
+
+### Example
+
+```
+(myenv) serve % pip list | grep torch
+torch                    2.2.1
+torchaudio               2.2.1
+torchdata                0.7.1
+torchtext                0.17.1
+torchvision              0.17.1
+(myenv3) serve % conda install -c pytorch-nightly torchserve torch-model-archiver torch-workflow-archiver
+(myenv3) serve % pip list | grep torch
+torch                    2.2.1
+torch-model-archiver     0.10.0b20240312
+torch-workflow-archiver  0.2.12b20240312
+torchaudio               2.2.1
+torchdata                0.7.1
+torchserve               0.10.0b20240312
+torchtext                0.17.1
+torchvision              0.17.1
+
+(myenv3) serve % torchserve --start --ncs --models densenet161.mar --model-store ./model_store_gen/
+Torchserve version: 0.10.0
+Number of GPUs: 0
+Number of CPUs: 10
+Max heap size: 8192 M
+Config file: N/A
+Inference address: http://127.0.0.1:8080
+Management address: http://127.0.0.1:8081
+Metrics address: http://127.0.0.1:8082
+Initial Models: densenet161.mar
+Netty threads: 0
+Netty client threads: 0
+Default workers per model: 10
+Blacklist Regex: N/A
+Maximum Response Size: 6553500
+Maximum Request Size: 6553500
+Limit Maximum Image Pixels: true
+Prefer direct buffer: false
+Allowed Urls: [file://.*|http(s)?://.*]
+Custom python dependency for model allowed: false
+Enable metrics API: true
+Metrics mode: LOG
+Disable system metrics: false
+CPP log config: N/A
+Model config: N/A
+System metrics command: default
+...
+Model server started.
+```

diff --git a/docs/Security.md b/docs/Security.md
index 530f0c8368..3afe7488db 100644
--- a/docs/Security.md
+++ b/docs/Security.md
@@ -5,6 +5,7 @@
 | Version | Supported |
 |---------| ------------------ |
 | 0.9.0 | :white_check_mark: |
+| 0.10.0 | :white_check_mark: |
 
 ## How we do security
 
@@ -36,6 +37,7 @@ TorchServe as much as possible relies on automated tools to do security scanning
    2. Using private-key/certificate files
 
    You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#enable-ssl)
+6. TorchServe supports token authorization: check [documentaion](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more information.

From 7785046f978da5431f4ef80df7d1ee479d824675 Mon Sep 17 00:00:00 2001
From: udaij12
Date: Tue, 12 Mar 2024 16:36:02 -0700
Subject: [PATCH 2/6] adding more logs

---
 docs/M1_support.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/docs/M1_support.md b/docs/M1_support.md
index 7cfc5a3ab6..bda10a7154 100644
--- a/docs/M1_support.md
+++ b/docs/M1_support.md
@@ -58,5 +58,17 @@ CPP log config: N/A
 Model config: N/A
 System metrics command: default
 ...
+2024-03-12T15:58:54,702 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model densenet161 loaded.
+2024-03-12T15:58:54,702 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: densenet161, count: 10
 Model server started.
+...
+(myenv3) serve % curl http://127.0.0.1:8080/predictions/densenet161 -T examples/image_classifier/kitten.jpg
+
+{
+  "tabby": 0.46661922335624695,
+  "tiger_cat": 0.46449029445648193,
+  "Egyptian_cat": 0.0661405548453331,
+  "lynx": 0.001292439759708941,
+  "plastic_bag": 0.00022909720428287983
+}
 ```

From 9b8c84ad8b39ce2144ebc191c7ae68ff7c15cdcc Mon Sep 17 00:00:00 2001
From: udaij12
Date: Tue, 12 Mar 2024 16:39:57 -0700
Subject: [PATCH 3/6] adding more logs

---
 docs/Security.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Security.md b/docs/Security.md
index 3afe7488db..7723088b24 100644
--- a/docs/Security.md
+++ b/docs/Security.md
@@ -37,7 +37,7 @@ TorchServe as much as possible relies on automated tools to do security scanning
    2. Using private-key/certificate files
 
    You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#enable-ssl)
-6. TorchServe supports token authorization: check [documentaion](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more information.
+6. TorchServe supports token authorization: check [documentation](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more information.

From b1a2ed9e9debd7c41ff64d9fa58b819a1e541ea4 Mon Sep 17 00:00:00 2001
From: udaij12
Date: Tue, 12 Mar 2024 21:25:53 -0700
Subject: [PATCH 4/6] remove M1 docs

---
 docs/M1_support.md | 74 ----------------------------------------------
 1 file changed, 74 deletions(-)
 delete mode 100644 docs/M1_support.md

diff --git a/docs/M1_support.md b/docs/M1_support.md
deleted file mode 100644
index bda10a7154..0000000000
--- a/docs/M1_support.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# M1 support
-
-## M1 Support
-TorchServe supports macOS with M1 hardware.
-
-1. TorchServe CI jobs now include M1 hardware to ensure continued support; see the GitHub [documentation](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories) on hosted M1 runners.
-   - [Regression Tests](https://github.com/pytorch/serve/blob/master/.github/workflows/regression_tests_cpu.yml)
-   - [Regression Binaries Test](https://github.com/pytorch/serve/blob/master/.github/workflows/regression_tests_cpu_binaries.yml)
-2. For [Docker](https://docs.docker.com/desktop/install/mac-install/), ensure Docker for Apple silicon is installed, then follow the [setup steps](https://github.com/pytorch/serve/tree/master/docker).
-## Running TorchServe on M1
-
-Follow the [getting started documentation](https://github.com/pytorch/serve?tab=readme-ov-file#-quick-start-with-torchserve-conda).
-
-### Example
-
-```
-(myenv) serve % pip list | grep torch
-torch                    2.2.1
-torchaudio               2.2.1
-torchdata                0.7.1
-torchtext                0.17.1
-torchvision              0.17.1
-(myenv3) serve % conda install -c pytorch-nightly torchserve torch-model-archiver torch-workflow-archiver
-(myenv3) serve % pip list | grep torch
-torch                    2.2.1
-torch-model-archiver     0.10.0b20240312
-torch-workflow-archiver  0.2.12b20240312
-torchaudio               2.2.1
-torchdata                0.7.1
-torchserve               0.10.0b20240312
-torchtext                0.17.1
-torchvision              0.17.1
-
-(myenv3) serve % torchserve --start --ncs --models densenet161.mar --model-store ./model_store_gen/
-Torchserve version: 0.10.0
-Number of GPUs: 0
-Number of CPUs: 10
-Max heap size: 8192 M
-Config file: N/A
-Inference address: http://127.0.0.1:8080
-Management address: http://127.0.0.1:8081
-Metrics address: http://127.0.0.1:8082
-Initial Models: densenet161.mar
-Netty threads: 0
-Netty client threads: 0
-Default workers per model: 10
-Blacklist Regex: N/A
-Maximum Response Size: 6553500
-Maximum Request Size: 6553500
-Limit Maximum Image Pixels: true
-Prefer direct buffer: false
-Allowed Urls: [file://.*|http(s)?://.*]
-Custom python dependency for model allowed: false
-Enable metrics API: true
-Metrics mode: LOG
-Disable system metrics: false
-CPP log config: N/A
-Model config: N/A
-System metrics command: default
-...
-2024-03-12T15:58:54,702 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model densenet161 loaded.
-2024-03-12T15:58:54,702 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: densenet161, count: 10
-Model server started.
-...
-(myenv3) serve % curl http://127.0.0.1:8080/predictions/densenet161 -T examples/image_classifier/kitten.jpg
-
-{
-  "tabby": 0.46661922335624695,
-  "tiger_cat": 0.46449029445648193,
-  "Egyptian_cat": 0.0661405548453331,
-  "lynx": 0.001292439759708941,
-  "plastic_bag": 0.00022909720428287983
-}
-```

From 6e9b90acd27ce02b801ea36754e6213b2fcc0230 Mon Sep 17 00:00:00 2001
From: udaij12
Date: Tue, 26 Mar 2024 10:32:01 -0700
Subject: [PATCH 5/6] merged Security docs

---
 SECURITY.md      |  2 ++
 docs/Security.md | 47 -----------------------------------------------
 2 files changed, 2 insertions(+), 47 deletions(-)
 delete mode 100644 docs/Security.md

diff --git a/SECURITY.md b/SECURITY.md
index c9f460f0db..70f3c470ce 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -5,6 +5,7 @@
 | Version | Supported |
 |---------| ------------------ |
 | 0.9.0 | :white_check_mark: |
+| 0.10.0 | :white_check_mark: |
 
 ## How we do security
 
@@ -44,6 +45,7 @@
    - Encoding: Convert special characters into safe representations.
    - Verification: Run tooling that identifies potential script injections (e.g. [models that detect prompt injection attempts](https://python.langchain.com/docs/guides/safety/hugging_face_prompt_injection)).
 7. If you intend to run multiple models in parallel with shared memory, it is your responsibility to ensure the models do not interact or access each other's data. The primary areas of concern are tenant isolation, resource allocation, model sharing and hardware attacks.
+8. TorchServe supports token authorization: check [documentation](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more information.

diff --git a/docs/Security.md b/docs/Security.md
deleted file mode 100644
index 7723088b24..0000000000
--- a/docs/Security.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Security Policy
-
-## Supported Versions
-
-| Version | Supported |
-|---------| ------------------ |
-| 0.9.0 | :white_check_mark: |
-| 0.10.0 | :white_check_mark: |
-
-
-## How we do security
-
-TorchServe as much as possible relies on automated tools to do security scanning, in particular we support
-1. Dependency Analysis: Using Dependabot
-2. Docker Scanning: Using Snyk
-3. Code Analysis: Using CodeQL
-
-## Important Security Guidelines
-
-1. TorchServe listens on the following ports
-    1. HTTP - `8080`, `8081`, `8082`
-    2. gRPC - `7070`, `7071`
-
-    These ports are accessible to `localhost` by default. The address can be configured by following the [guide](https://pytorch.org/serve/configuration.html#configure-torchserve-listening-address-and-port)
-    TorchServe does not prevent users from configuring the address to be `0.0.0.0`. Please be aware of the security risks if you use `0.0.0.0`
-2. TorchServe's Docker image is configured to expose the ports `8080`, `8081`, `8082`, `7070`, `7071` to the host by [default](https://github.com/pytorch/serve/blob/master/docker/Dockerfile). When these ports are mapped to the host, make sure to use `localhost` or a specific ip address.
-
-3. Be sure to validate the authenticity of the `.mar` file being used with TorchServe.
-    1. A `.mar` file being downloaded from the internet from an untrustworthy source may have malicious code, compromising the integrity of your application
-    2. TorchServe executes arbitrary python code packaged in the `mar` file. Make sure that you've either audited that the code you're using is safe and/or is from a source that you trust
-4. By default TorchServe allows you to register models from all URLs. Make sure to set `allowed_urls` parameter in config.properties to restrict this. You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#other-properties)
-    - `use_env_allowed_urls=true` is required in config.properties to read `allowed_urls` from environment variable
-5. Enable SSL:
-
-    TorchServe supports two ways to configure SSL:
-    1. Using a keystore
-    2. Using private-key/certificate files
-
-    You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#enable-ssl)
-6. TorchServe supports token authorization: check [documentation](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more information.
-
-
-
-
-## Reporting a Vulnerability
-
-If you find a serious vulnerability please report it to opensource@meta.com and torchserve@amazon.com

From 4ea1823d9c22bb18cf1a68d0461e0f2b575b4f22 Mon Sep 17 00:00:00 2001
From: udaij12
Date: Tue, 26 Mar 2024 11:02:08 -0700
Subject: [PATCH 6/6] changing table

---
 SECURITY.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/SECURITY.md b/SECURITY.md
index 70f3c470ce..f0e7e8c179 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -4,8 +4,7 @@
 
 | Version | Supported |
 |---------| ------------------ |
-| 0.9.0 | :white_check_mark: |
-| 0.10.0 | :white_check_mark: |
+| 0.10.0 | :white_check_mark: |
 
 ## How we do security
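The predictions endpoint in the M1 transcript above returns a JSON object mapping class labels to confidence scores. A minimal sketch of parsing and ranking such a response in Python (the sample body is copied verbatim from the transcript; a real client would read it from the HTTP response rather than a hardcoded string):

```python
import json

# Sample response body, copied from the docs/M1_support.md transcript
# (densenet161 prediction for kitten.jpg). Hardcoded here for illustration.
body = """
{
    "tabby": 0.46661922335624695,
    "tiger_cat": 0.46449029445648193,
    "Egyptian_cat": 0.0661405548453331,
    "lynx": 0.001292439759708941,
    "plastic_bag": 0.00022909720428287983
}
"""

# Parse the JSON and rank labels by confidence, highest first.
predictions = json.loads(body)
ranked = sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)
top_label, top_score = ranked[0]
print(top_label, round(top_score, 3))  # → tabby 0.467
```

The same parsing applies to any TorchServe image-classifier response of this label-to-score shape.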