
Add documentation on request cancellation #6403

Merged: 11 commits, Oct 11, 2023
3 changes: 3 additions & 0 deletions docs/README.md
@@ -69,6 +69,7 @@ The User Guide describes how to configure Triton, organize and configure your mo
* Collecting Server Metrics [[Overview](README.md#metrics) || [Details](user_guide/metrics.md)]
* Supporting Custom Ops/layers [[Overview](README.md#framework-custom-operations) || [Details](user_guide/custom_operations.md)]
* Using the Client API [[Overview](README.md#client-libraries-and-examples) || [Details](https://github.com/triton-inference-server/client)]
* Cancelling Inflight Inference Request [[Overview](README.md#cancelling-inflight-inference-request) || [Details](user_guide/request_cancellation.md)]
* Analyzing Performance [[Overview](README.md#performance-analysis)]
* Deploying on edge (Jetson) [[Overview](README.md#jetson-and-jetpack)]
* Debugging Guide [Details](./user_guide/debugging_guide.md)
@@ -165,6 +166,8 @@ Use the [Triton Client](https://github.com/triton-inference-server/client) API t
- [Java/Scala](https://github.com/triton-inference-server/client/tree/main/src/grpc_generated/java)
- [Javascript](https://github.com/triton-inference-server/client/tree/main/src/grpc_generated/javascript)
- [Shared Memory Extension](protocol/extension_shared_memory.md)
### Cancelling Inflight Inference Request
Triton can detect and handle requests that have been cancelled from the client side. This [document](user_guide/request_cancellation.md) discusses the scope and limitations of the feature.
### Performance Analysis
Understanding inference performance is key to better resource utilization. Use Triton's tools to customize your deployment.
- [Performance Tuning Guide](user_guide/performance_tuning.md)
74 changes: 74 additions & 0 deletions docs/user_guide/request_cancellation.md
@@ -0,0 +1,74 @@
<!--
# Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

# Request Cancellation

Starting from release 23.10, Triton supports handling request cancellation received
from the gRPC client or a C API user. Long-running inference requests, such
as those for auto-generative large language models, may run for an indeterminate
amount of time or an indeterminate number of steps. Additionally, clients may
enqueue a large number of requests as part of a sequence or request stream
and later determine that the results are no longer needed. Continuing to process
requests whose results are no longer required can significantly impact server
resources.

[In-Process Triton Server C API](../customization_guide/inference_protocols.md#in-process-triton-server-api) has been enhanced with `TRITONSERVER_InferenceRequestCancel`
and `TRITONSERVER_InferenceRequestIsCancelled` to cancel and query the cancellation
status of an inflight request. Read more about the APIs in [tritonserver.h](https://github.com/triton-inference-server/core/blob/main/include/triton/core/tritonserver.h).

In addition, the [gRPC endpoint](../customization_guide/inference_protocols.md#httprest-and-grpc-protocols) can
now detect cancellation from the client and attempt to terminate the request.
At present, only the gRPC Python client supports issuing request cancellation
to the server endpoint. See [request-cancellation](https://github.com/triton-inference-server/client#request-cancellation)
for more details on how to issue requests from the client side.
See the gRPC guide on RPC [cancellation](https://grpc.io/docs/guides/cancellation/) for
finer details.
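The client-side flow can be sketched with a stdlib-only stand-in, so it runs without a server. `FakeInferContext` and `async_infer_stub` below are hypothetical stand-ins for the handle an asynchronous inference call returns; they are not part of `tritonclient`, and the real API is described in the client documentation linked above.

```python
# Illustrative, stdlib-only sketch of the client-side cancellation flow.
# FakeInferContext and async_infer_stub are hypothetical stand-ins (NOT part
# of tritonclient); see the linked client docs for the real API.
import threading


class FakeInferContext:
    """Stand-in for the handle an asynchronous inference call returns."""

    def __init__(self):
        self._cancelled = threading.Event()

    def cancel(self):
        # The real client issues a gRPC-level cancellation here, which the
        # server may then propagate to the backend executing the request.
        self._cancelled.set()

    def cancelled(self):
        return self._cancelled.is_set()


def async_infer_stub():
    # Real code would issue the request through the gRPC client here and
    # keep the returned handle so the request can be cancelled later.
    return FakeInferContext()


ctx = async_infer_stub()
# ... later the application decides the result is no longer needed:
ctx.cancel()
print(ctx.cancelled())  # True
```

The key point the sketch captures is that cancellation is driven from the handle the client keeps, not from a new RPC the application constructs itself.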


Upon receiving a request cancellation, Triton does its best to cancel the request
at various points. However, once a request has been given to the backend
for execution, it is up to the individual backend to detect and handle
request termination.
Currently, the following backends support early termination:
- [vLLM backend](https://github.com/triton-inference-server/vllm_backend)
- [python backend](https://github.com/triton-inference-server/python_backend)

The Python backend is a special case: it exposes APIs to detect the cancellation
status of a request, but it is up to the `model.py` developer to check whether
the request is cancelled and terminate further execution.
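A minimal sketch of the pattern a `model.py` author might follow, assuming the `is_cancelled()` query described in the python backend documentation. `StubRequest` is a hypothetical stand-in for the backend's request object, so the example runs stand-alone; it is not a real Triton class.

```python
# Sketch of a model.py-style execute loop that skips cancelled requests.
# StubRequest is a hypothetical stand-in for the python backend's request
# object (which exposes is_cancelled()); it is NOT a real Triton class.
class StubRequest:
    def __init__(self, cancelled=False):
        self._cancelled = cancelled

    def is_cancelled(self):
        return self._cancelled


def execute(requests):
    results = []
    for request in requests:
        # Check cancellation before (and, in long-running loops, during)
        # the expensive work; skip further processing when cancelled.
        if request.is_cancelled():
            results.append("CANCELLED")
            continue
        results.append("OK")  # placeholder for the real inference result
    return results


print(execute([StubRequest(), StubRequest(cancelled=True)]))
# ['OK', 'CANCELLED']
```

In a real model the "CANCELLED" branch would send an error or empty response for that request rather than a placeholder string.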

**For the backend developer**: The backend APIs have also been enhanced to let the
backend detect whether a request received from Triton core has been cancelled.
See `TRITONBACKEND_RequestIsCancelled` and `TRITONBACKEND_ResponseFactoryIsCancelled`
in [tritonbackend.h](https://github.com/triton-inference-server/core/blob/main/include/triton/core/tritonbackend.h)
for more details. Upon detecting request cancellation, the backend can stop
processing the request any further.
Python models running behind the python backend can also query the cancellation status
of the request and the response sender. See [this](https://github.com/triton-inference-server/python_backend#request-cancellation-handling)
section in the python backend documentation for more details.
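For decoupled (streaming) models, where cancellation matters most for long generations, the pattern is to check cancellation between steps. `StubResponseSender` below is a hypothetical stand-in for the response sender's `is_cancelled()` query described in the python backend documentation, used here only so the sketch runs stand-alone.

```python
# Sketch of the streaming case: stop generating once the client has
# cancelled. StubResponseSender is a hypothetical stand-in for the python
# backend's response sender; it is NOT a real Triton class.
class StubResponseSender:
    def __init__(self, cancel_after):
        self._cancel_after = cancel_after  # simulate cancellation arriving
        self.sent = 0

    def is_cancelled(self):
        return self.sent >= self._cancel_after

    def send(self, token):
        self.sent += 1  # real code would stream a response to the client


def generate_tokens(sender, max_steps=10):
    steps = 0
    for _ in range(max_steps):
        # Poll cancellation between generation steps so a cancelled
        # request stops consuming server resources promptly.
        if sender.is_cancelled():
            break
        sender.send("tok")
        steps += 1
    return steps


print(generate_tokens(StubResponseSender(cancel_after=3)))  # 3
```

Polling once per generation step keeps the overhead negligible while bounding how much work a cancelled request can still consume.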