Concurrent client requests #33
Comments
@frol I believe that the …
@zackangelo is correct! This pretty much comes down to how tower handles back pressure. Internally we need a `&mut self` within the future to keep checking whether our inner service is ready to accept the next request. We have to borrow the service exclusively because only one caller may submit a request once the service is ready. To fix this internally, we use a buffer channel, which lets you multiplex this polling of readiness and handling of back pressure. Hopefully this explains it a bit better.
Thank you for the thorough explanation! Just to confirm, I was able to use:

```rust
use futures::join;

pub mod hello_world {
    tonic::include_proto!("helloworld");
}

use hello_world::{client::GreeterClient, HelloRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let main_client = GreeterClient::connect("http://[::1]:50051")?;

    let mut client = main_client.clone();
    let res1 = client.say_hello(tonic::Request::new(HelloRequest { name: "hello1".into() }));

    let mut client = main_client.clone();
    let res2 = client.say_hello(tonic::Request::new(HelloRequest { name: "hello2".into() }));

    println!("RESPONSES={:?}", join!(res1, res2));

    Ok(())
}
```

Could you help me with a solution to fetch a list of resources concurrently? (E.g. I have a vector of …)

```rust
pub mod hello_world {
    tonic::include_proto!("helloworld");
}

use hello_world::{client::GreeterClient, HelloRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let main_client = GreeterClient::connect("http://[::1]:50051")?;

    let mut client = main_client.clone();
    let res1 = client.say_hello(tonic::Request::new(HelloRequest { name: "hello1".into() }));

    let mut client = main_client.clone();
    let res2 = client.say_hello(tonic::Request::new(HelloRequest { name: "hello2".into() }));

    println!("RESPONSES={:?}", futures::future::join_all(vec![res1, res2]).await);

    Ok(())
}
```
And even if that would work, I would still need to store the …
@frol this works:

```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let main_client = GreeterClient::connect("http://[::1]:50051")?;

    let mut client1 = main_client.clone();
    let mut client2 = main_client.clone();

    let res1 = client1.say_hello(tonic::Request::new(HelloRequest {
        name: "hello".into(),
    }));
    let res2 = client2.say_hello(tonic::Request::new(HelloRequest {
        name: "world".into(),
    }));

    let responses = futures::future::try_join_all(vec![res1, res2]).await?;

    println!("RESPONSE={:?}", responses);

    Ok(())
}
```
Super helpful, thank you. What if it were in a loop? I'm generating the request/response types from the protobufs, and `tonic::response::Response` cannot be cloned. This definitely inspired me to try the streaming approach, which works swimmingly. But for a batch of requests and handling their responses, does this require an …
@blittable see #44 (comment) for a possible solution |
I think we can close this, feel free to reopen if you have any more questions :) |
Rust and tonic noob here, but if …
@ettersi …
Right, I forgot that it has to be a bitwise copy. Sorry about the spam, then. |
Quick question, given that the …
@gabrik see previous info #33 (comment) |
We ensure the client is cloned as it is a lightweight clone and enables concurrency. See hyperium/tonic#33
Feature Request
Motivation
It is common to need concurrent client requests, but the gRPC client dispatcher seems to be implemented to handle one call at a time, so the following example does not work by design:
Proposal
It seems that there is a need for multiplexing and a pool of clients.