[core] Retrieve token from server in GCS client [4/n] #35014
Conversation
Thanks for cleaning up the code! I think there are two things here:
- The token shouldn't be generated in the first step. We should connect to the store first and then try to fetch the token. If it's not there, we generate and store it (a rough sketch follows below).
- The token should be in the RPC layer, not the application layer. That means we should add it to the client context in the client call, and in the gRPC server, whenever we get a call, we should check the token first; if it's valid, we proceed by calling the callback, otherwise we just return a failure.
I think this PR could just focus on the first one.
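For the first point, here is a minimal sketch of the fetch-or-generate flow, using a hypothetical in-memory key/value store in place of the real (asynchronous) GCS persistent storage API; the key name and helper names are assumptions, not the PR's actual code:

```cpp
#include <map>
#include <optional>
#include <random>
#include <string>

// Hypothetical in-memory stand-in for the GCS persistent store.
struct KvStore {
  std::map<std::string, std::string> data;
  std::optional<std::string> Get(const std::string &key) {
    auto it = data.find(key);
    if (it == data.end()) return std::nullopt;
    return it->second;
  }
  void Put(const std::string &key, const std::string &value) { data[key] = value; }
};

// Hypothetical token generator (the real ClusterID is a fixed-size random ID).
std::string GenerateToken() {
  static std::mt19937_64 rng{std::random_device{}()};
  return std::to_string(rng());
}

// Connect to the store first, then fetch the token; only generate and persist
// a new one if none exists yet.
std::string GetOrGenerateToken(KvStore &store) {
  if (auto existing = store.Get("cluster_id")) return *existing;
  std::string token = GenerateToken();
  store.Put("cluster_id", token);
  return token;
}
```

Generating only after the lookup fails keeps the token stable across GCS restarts, since a previously persisted ID is always reused.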
+1 on just introducing the cluster ID.
+1 for the cluster_id concept.
It doesn't matter for now, since this change doesn't plumb the Cluster-ID propagation through to other services, but in the long run it would probably be good for each service to have some sort of verification/auth with its client. This decision would also affect whether or not we need a shared_future for ClientCallManager (roughly sketched below).
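For illustration only, the shared_future alternative mentioned above might look roughly like this; the names and the blocking wait are hypothetical, not the PR's code:

```cpp
#include <future>
#include <string>
#include <utility>

// Hypothetical sketch: the call manager is constructed before the cluster ID
// is known, so it holds a shared_future that the connect path fulfills later.
class ClientCallManager {
 public:
  explicit ClientCallManager(std::shared_future<std::string> cluster_id)
      : cluster_id_(std::move(cluster_id)) {}

  // Blocks until the ID is available, then returns it for request metadata.
  std::string ClusterIdForMetadata() const { return cluster_id_.get(); }

 private:
  std::shared_future<std::string> cluster_id_;
};

int main() {
  std::promise<std::string> cluster_id_promise;
  ClientCallManager manager(cluster_id_promise.get_future().share());
  cluster_id_promise.set_value("example-cluster-id");  // set once connected
  return manager.ClusterIdForMetadata().empty() ? 1 : 0;
}
```

Whether a blocking wait like this is acceptable is exactly the kind of trade-off the propagation question above affects.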
|
[core] Add ClusterID token to GRPC server [1/n] (ray-project#36517)
First of a stack of changes to plumb token exchange between the GCS client and server. This adds a ClusterID token that can be passed to a GRPC server, which then initializes each component GRPC service with the token by passing it to the ServerCallFactory objects when they are set up. When the factories create ServerCall objects for the GRPC service completion queue, this token is also passed to the ServerCall to check against inbound request metadata. The actual authentication check does not take place in this PR.
Note: This change also minorly cleans up some code in the GCS server (changes a string check to use an enum).
Next change (client-side analogue): ray-project#36526
Signed-off-by: vitsai <victoria@anyscale.com>
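As a rough illustration of the check the ServerCall would eventually perform (deferred past this PR), using the standard gRPC C++ metadata API and an assumed "cluster_id" metadata key:

```cpp
#include <grpcpp/grpcpp.h>

#include <string>

// Sketch only: compare the token attached by the client against the one the
// server was initialized with; the handler callback runs only on a match.
bool ClusterIdMatches(const grpc::ServerContext &ctx,
                      const std::string &expected_cluster_id) {
  const auto &metadata = ctx.client_metadata();
  const auto it = metadata.find("cluster_id");  // assumed metadata key
  if (it == metadata.end()) {
    return false;  // no token attached by the client
  }
  return std::string(it->second.data(), it->second.size()) == expected_cluster_id;
}
```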
Add support for a GetClusterId RPC call in the GCS server that clients can use to obtain the cluster ID. In particular, the GCS server retrieves the cluster ID from the persistent store if it exists, or otherwise generates a new one and stores it.
Previous PR (GRPC client): #36526
Next PR (GCS client): #35014
Part 3 of breaking down #35014 into more digestible parts.
This change adds a ClusterID to the GRPC client (client call, client call manager) and attaches it to the metadata of each ClientCall, provided it is non-nil.
Previous PR (GRPC server): ray-project#36517
Next PR (GCS server): ray-project#36535
Part 2 of breaking down ray-project#35014 into more digestible parts.
Related issue number: ray-project#34763
Signed-off-by: e428265 <arvind.chandramouli@lmco.com>
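A minimal sketch of that client-side attachment, again assuming the "cluster_id" metadata key and representing the ClusterID as a plain string for simplicity:

```cpp
#include <grpcpp/grpcpp.h>

#include <string>

// Sketch only: stamp each outgoing call's context with the cached cluster ID,
// skipping it while the ID is still unset ("nil") during early connection.
void AttachClusterId(grpc::ClientContext &ctx, const std::string &cluster_id) {
  if (!cluster_id.empty()) {
    ctx.AddMetadata("cluster_id", cluster_id);
  }
}
```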
Why are these changes needed?
Retrieve the token from the GCS server in the GCS client while connecting, so it can be attached to the metadata of subsequent requests.
Previous PR (GCS server): #36535
Next PR (auth): #36073
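A sketch of this connect-time handshake follows; the stub and message types below are placeholders, since the real client issues the RPC asynchronously through its generated GCS stubs:

```cpp
#include <string>

// Placeholder stand-ins for the generated GetClusterId request/reply and stub.
struct GetClusterIdReply {
  std::string cluster_id;
};
struct GcsRpcStub {
  GetClusterIdReply GetClusterId() { return {"example-cluster-id"}; }
};

class GcsClient {
 public:
  // During Connect(), fetch the cluster ID token from the GCS server once and
  // cache it; every subsequent request attaches it to its metadata.
  void Connect(GcsRpcStub &stub) { cluster_id_ = stub.GetClusterId().cluster_id; }
  const std::string &cluster_id() const { return cluster_id_; }

 private:
  std::string cluster_id_;
};
```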
Related issue number
#34763
Checks
- I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.