Provide API to refresh token after authc lifecycle #108346
Pinging @elastic/kibana-core (Team:Core)
Q: Is this an HTTP Service concern? If it's Elasticsearch client-only, shouldn't this re-authentication mechanism be implemented at the client level?

Yes, in the Elasticsearch service. Maybe we should decouple

Could you elaborate on the benefits of this approach? Right now the ES client doesn't know about the
After syncing offline, I noticed I was missing some context. Dismiss my comment 😇

Just for my understanding, what does security plan to do with this? Is it just for logging / audit purposes?

In case we detect that this
@mshustov was this part of the initial scope?
@pgayvallet yes, in the issue title: |
I tried to take a look at how we could implement this behavior. I need to point out that the scoped clients we provide to our consumers are, even if cloaked under our own interface, vanilla ES clients. ATM, the only two options I see to have this try/catch-401/retry behavior would be:

@delvedor do you have any other suggestions on how/where we could perform this request retry logic? Also, note that ATM the headers are passed during the instantiation of the client (kibana/src/core/server/elasticsearch/client/cluster_client.ts, lines 79 to 84 in 3c8fa52).
@delvedor correct me if I'm wrong, but there's no way to update a client's headers once it has been instantiated? Maybe keeping a reference of

Also, ideally, we would want reactive updates of the other clients using the same request, which will be even harder to achieve, e.g.

```ts
const client1 = clusterClient.asScoped(request);
const client2 = clusterClient.asScoped(request);

// causing the 401 scenario because of expired access token
const responseReturnedAfterRetry = await client1.search({});

// client2 should ideally already be updated with the new headers and avoid retriggering the 401
await client2.search({});
```

As a side note, I think that the @elastic/kibana-security implementation would also need to be smart regarding this refresh logic, and be able to identify that a request is currently in this retry stage, e.g.

```ts
const client1 = clusterClient.asScoped(request);
const client2 = clusterClient.asScoped(request);

// the refresh logic and related ES call would ideally only be performed once
await Promise.all([client1.search({}), client2.search({})]);
```
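The "only perform the refresh once" idea can be sketched as a small single-flight helper. This is purely illustrative: `TokenRefreshDeduplicator` and `refreshFn` are hypothetical names, not Kibana or Security-plugin APIs.

```typescript
// Hypothetical sketch: collapse concurrent token refreshes for the same request
// into a single in-flight promise, so the ES refresh call happens only once.
type AuthHeaders = Record<string, string>;

class TokenRefreshDeduplicator {
  // One in-flight refresh promise per request identity.
  private inflight = new Map<string, Promise<AuthHeaders>>();

  constructor(private refreshFn: (requestId: string) => Promise<AuthHeaders>) {}

  refresh(requestId: string): Promise<AuthHeaders> {
    const existing = this.inflight.get(requestId);
    if (existing) {
      return existing; // a refresh is already running for this request: reuse it
    }
    const pending = this.refreshFn(requestId).finally(() => {
      // allow a later, genuinely new refresh once this one settles
      this.inflight.delete(requestId);
    });
    this.inflight.set(requestId, pending);
    return pending;
  }
}
```

With something like this in place, two concurrent 401s for the same request would trigger `refreshFn` once, and both retries would reuse the same new headers.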
While refreshing the same access token multiple times isn't ideal, it's not critical either, since Elasticsearch will return exactly the same access/refresh token pair during the 60s refresh window. In fact, we do this quite often at the authentication stage already, when the Kibana UI makes multiple requests at the same time that trigger an access token refresh. Having said that, I agree that there will be enough corner cases the Security plugin will need to carefully handle.
A custom transport is the best option. Custom headers are stored in the transport instance; you can update them by using the headers symbol:

```ts
import * as http from 'http'
import { Transport } from '@elastic/elasticsearch'
import { kHeaders } from '@elastic/elasticsearch/lib/symbols'

class KibanaTransport extends Transport {
  updateHeaders (newHeaders: http.IncomingHttpHeaders): void {
    this[kHeaders] = { ...this[kHeaders], ...newHeaders }
  }
}

// or
const newHeaders: http.IncomingHttpHeaders = { ... }
clientInstance.transport[kHeaders] = { ...clientInstance.transport[kHeaders], ...newHeaders }
```

The code above should work, but you are tampering with internals and it might break in the future. It will work for the v8 client; in the v7 client you can access the headers directly with
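To make the try/catch-401/retry behavior discussed earlier concrete, here is a minimal, framework-free sketch. `performRequest` and `refreshAuthHeaders` are hypothetical stand-ins for the scoped client call and the Security-provided refresh hook; this is not the actual transport integration.

```typescript
type AuthHeaders = Record<string, string>;

// Shape of the error we inspect: the real ES client throws a ResponseError
// carrying a statusCode property, which this illustrative interface mirrors.
interface MaybeResponseError extends Error {
  statusCode?: number;
}

async function requestWithRetryOn401<T>(
  performRequest: (headers: AuthHeaders) => Promise<T>,
  initialHeaders: AuthHeaders,
  refreshAuthHeaders: () => Promise<AuthHeaders>
): Promise<T> {
  try {
    return await performRequest(initialHeaders);
  } catch (err) {
    if ((err as MaybeResponseError).statusCode !== 401) {
      throw err; // only expired-credential failures are retried
    }
    // Ask the auth layer for fresh credentials, then retry exactly once.
    const refreshed = await refreshAuthHeaders();
    return await performRequest({ ...initialHeaders, ...refreshed });
  }
}
```

In the real implementation this logic would live inside a custom transport (as suggested above) rather than in a wrapper function, but the control flow would be the same.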
Overall I agree. It wouldn't make much sense to add an API specifically to update only the headers imho. We could imagine an API to update the client's config more globally, but the complexity for some of the options would make it very complicated, and it would be hard to justify compared to asking consumers to just recreate a new instance of the Client with the updated config. The issue we have with our specific usage in Kibana is that we need a way for the client instances we expose to our API consumers to refresh these header values internally, as we can't predict when we'll need to update them (and we can't just ask consumers to create a new instance of the client in case of such a 401; the whole intent of this issue is to manage that internally). So I guess our only technical options for this are to either:

Not a fan of either of those, but if the client ensures that this symbol hack will keep working by adding unit tests, it's probably easier to go this way rather than reopening the client wrapper/proxy discussion?

Now, regarding the changes required in `AuthHeadersStorage` (kibana/src/core/server/http/auth_headers_storage.ts, lines 23 to 28 in 338fe1a):
I guess our only option here is to have it return an observable instead, something like:

```ts
export type GetAuthHeaders = (request: KibanaRequest) => Observable<AuthHeaders> | undefined;

/** @internal */
export class AuthHeadersStorage {
  private authHeadersCache = new WeakMap<Request, BehaviorSubject<AuthHeaders>>();

  public set = (request: KibanaRequest | Request, headers: AuthHeaders) => {
    const rawRequest = ensureRawRequest(request);
    if (!this.authHeadersCache.has(rawRequest)) {
      this.authHeadersCache.set(rawRequest, new BehaviorSubject(headers));
    } else {
      this.authHeadersCache.get(rawRequest)!.next(headers);
    }
  };

  public get: GetAuthHeaders = (request) => {
    return this.authHeadersCache.get(ensureRawRequest(request));
  };
}
```

Then we'll want to adapt

My concerns:
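As a rough illustration of what consuming such an observable could look like, here is a dependency-free sketch. `HeadersSubject` is a tiny stand-in for rxjs's `BehaviorSubject`, and `FakeScopedClient` is a hypothetical shim, not the real cluster client.

```typescript
type AuthHeaders = Record<string, string>;

// Tiny stand-in for BehaviorSubject so the sketch has no rxjs dependency:
// it remembers the latest value and replays it to new subscribers.
class HeadersSubject {
  private listeners: Array<(headers: AuthHeaders) => void> = [];

  constructor(private value: AuthHeaders) {}

  next(headers: AuthHeaders): void {
    this.value = headers;
    for (const listener of this.listeners) {
      listener(headers);
    }
  }

  subscribe(listener: (headers: AuthHeaders) => void): void {
    listener(this.value); // replay the current value, like BehaviorSubject
    this.listeners.push(listener);
  }
}

// Hypothetical scoped-client shim: every client created for the same request
// stays in sync with the shared subject, which is the "reactive update" goal
// discussed earlier in the thread.
class FakeScopedClient {
  private headers: AuthHeaders = {};

  constructor(subject: HeadersSubject) {
    subject.subscribe((headers) => {
      this.headers = { ...headers };
    });
  }

  currentAuthorization(): string | undefined {
    return this.headers.authorization;
  }
}
```

With this shape, a `set` on the storage would propagate the new headers to every client scoped to the same request without any of them re-triggering the 401.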
(kibana/src/core/server/elasticsearch/client/cluster_client.ts, lines 79 to 85 in 3c8fa52)

@delvedor correct me if I'm wrong, but if we go with this

Btw, are two clients instantiated via

```ts
const scopedClient = this.rootScopedClient.child({
  headers: scopedHeaders,
});
```

sharing the same `Transport` instance?

Not saying this should be a reason not to try to do it, but just mentioning.
FWIW, I'm going to start a POC to identify potential other blockers.
@delvedor, the client's type definitions re-export everything from `@elastic/transport`:

```ts
// node_modules/@elastic/elasticsearch/index.d.ts
export * from '@elastic/transport'
```

Accessing the symbols through `@elastic/transport` is error-prone, because the version we have as a direct dependency and the version used by `@elastic/elasticsearch` can diverge. E.g. I had to align the explicit version of the transport with the implicit one. Do you think it would be alright to expose those symbols (and the `TransportOptions` type) from the client directly?
Now for @elastic/kibana-security: I'm a bit lost with the scope of this issue. I initially thought the responsibility of the hook would be to return the updated authentication headers, and that core's responsibility with them would only be to:
However, when looking at #104893 (comment), I'm kinda scared. Are we supposed to allow the hook to return values for the whole authentication state (kibana/src/core/server/http/http_server.ts, lines 436 to 454 in 755950a)?

Because doing so would require us to totally de-couple

Also, the current authc API (kibana/src/core/server/http/lifecycle/auth.ts, lines 147 to 151 in 7d2c4d4) allows things such as redirecting the user, or returning a

We'll probably need a better specification of what exactly you had in mind regarding the hook API and what core has to do with its response. Also, this part of the original specification bothers me:
@pgayvallet every child client has its own Transport instance.
It will not cause any issue with the connection pool.
@pgayvallet you are right, the symbols are not exported, as they are not meant to be part of the public API. While `TransportOptions` should be re-exported, I'm a bit on the fence about exposing the symbols for that same reason.
The design POC proposed in #120677 has been validated as being sufficient for the needs of @elastic/kibana-security, and the approach did not receive any blocking feedback, so I guess this settles the design phase. The implementation phase has been prioritized into our "next sprint" queue, so we should start it in the next few weeks.
Security has to deal with a growing number of cases where authc credentials expire and a request to Elasticsearch fails with a 401 after the `onAuth` hook. Since only Core has access to all the calls to the Elasticsearch service, it should provide an API for the Security plugin that would allow it to handle this periodic authentication issue.

Blocks: #104893 (see the issue description for the current and proposed flow diagrams)

The required functionality is listed in #104893 (comment)

On top of it, we might need to refactor `AuthHeadersStorage` and the Elasticsearch service to support auth credentials being updated after the `onAuth` stage.