
skip send response if grpc channel is closed by client #2420

Merged: 5 commits into master, Jun 24, 2023

Conversation

@lxning (Collaborator) commented Jun 22, 2023

Description

Summary: when a gRPC client closes its channel and thereby cancels an in-flight call while TorchServe is still producing responses (e.g., a streaming prediction), the frontend should not attempt to write to the cancelled call. This change checks whether the client call has already been cancelled before sending a response; if so, it logs a warning and skips the send.

Fixes #2321
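
The diff itself is not reproduced in this conversation. As a minimal sketch of the approach, assuming the standard gRPC Java pattern where the service's StreamObserver can be cast to ServerCallStreamObserver and polled with isCancelled(), the send path can be guarded as below; the class and method names are illustrative, not the exact code from this PR:

import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch only: TorchServe's real response type and surrounding
// class differ; the cancellation check itself is the point.
public final class GrpcResponseGuard {
    private static final Logger logger = LoggerFactory.getLogger(GrpcResponseGuard.class);

    public static <T> boolean trySend(StreamObserver<T> observer, T reply, String requestId) {
        // gRPC Java hands services a ServerCallStreamObserver at runtime;
        // isCancelled() becomes true once the client cancels or closes the channel.
        ServerCallStreamObserver<T> serverObserver = (ServerCallStreamObserver<T>) observer;
        if (serverObserver.isCancelled()) {
            // Writing now would throw StatusRuntimeException (CANCELLED),
            // so warn and skip, matching the "already cancelled" lines
            // in the test log below.
            logger.warn(
                "grpc client call already cancelled, not able to send this response for requestId: {}",
                requestId);
            return false;
        }
        observer.onNext(reply);
        return true;
    }
}

A worker loop streaming intermediate predictions can call trySend() for each chunk and simply skip the write once it returns false, instead of propagating an exception.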

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing


  • Test
  1. Add a channel.close() call in ts_scripts/torchserve_grpc_client.py so the client closes the channel while the server is still streaming:
import sys

import grpc

import inference_pb2
import inference_pb2_grpc


def infer_stream(stub, model_name, model_input):
    # Create a local channel/stub so the test can close the channel mid-stream.
    channel = grpc.insecure_channel("localhost:7070")
    stub = inference_pb2_grpc.InferenceAPIsServiceStub(channel)

    with open(model_input, "rb") as f:
        data = f.read()

    input_data = {"data": data}
    responses = stub.StreamPredictions(
        inference_pb2.PredictionsRequest(model_name=model_name, input=input_data)
    )

    try:
        for resp in responses:
            # Close the channel after the first response arrives to simulate a
            # client disconnecting while the server is still streaming.
            channel.close()
            prediction = resp.prediction.decode("utf-8")
            print(prediction)
    except grpc.RpcError:
        sys.exit(1)
  2. Run the modified streaming client:
     python ts_scripts/torchserve_grpc_client.py infer_stream echo_stream examples/text_classification/sample_text.txt
  3. TorchServe log: the frontend warns that the call is already cancelled and skips the send instead of raising an error:
2023-06-22T12:15:47,169 [WARN ] grpc-default-executor-1 org.pytorch.serve.grpcimpl.InferenceImpl - grpc client call already cancelled
2023-06-22T12:15:47,169 [INFO ] grpc-default-executor-1 ACCESS_LOG - /[0:0:0:0:0:0:0:1]:54959 "gRPC org.pytorch.serve.grpc.inference.InferenceAPIsService/StreamPredictions HTTP/2.0" 1 55
2023-06-22T12:15:47,460 [WARN ] W-9000-echo_stream_1.0 org.pytorch.serve.job.Job - grpc client call already cancelled, not able to send this response for requestId: 2084f086-f313-4809-8619-3f30feff23b4
2023-06-22T12:15:47,461 [DEBUG] W-9000-echo_stream_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: false
2023-06-22T12:15:47,764 [WARN ] W-9000-echo_stream_1.0 org.pytorch.serve.job.Job - grpc client call already cancelled, not able to send this response for requestId: 2084f086-f313-4809-8619-3f30feff23b4
2023-06-22T12:15:47,765 [DEBUG] W-9000-echo_stream_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: false
2023-06-22T12:15:48,069 [DEBUG] W-9000-echo_stream_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
2023-06-22T12:15:48,069 [INFO ] W-9000-echo_stream_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 905

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

@lxning lxning self-assigned this Jun 22, 2023
@lxning lxning changed the title skip send response if grpc channel closed skip send response if grpc channel is closed by client Jun 22, 2023
@lxning lxning added the bug Something isn't working label Jun 22, 2023
@lxning lxning added this to the v0.9.0 milestone Jun 22, 2023
codecov bot commented Jun 22, 2023

Codecov Report

Merging #2420 (8393f47) into master (603e89f) will increase coverage by 0.10%.
The diff coverage is n/a.

❗ Current head 8393f47 differs from pull request most recent head 7e03592. Consider uploading reports for the commit 7e03592 to get more accurate results

@@            Coverage Diff             @@
##           master    #2420      +/-   ##
==========================================
+ Coverage   71.78%   71.89%   +0.10%     
==========================================
  Files          78       78              
  Lines        3654     3654              
  Branches       58       58              
==========================================
+ Hits         2623     2627       +4     
+ Misses       1027     1023       -4     
  Partials        4        4              

see 2 files with indirect coverage changes

