Add SSE support to Core HTTP APIs #198913
Pinging @elastic/kibana-core (Team:Core)
As we have also discussed offline, one risk here is that when not using HTTP/2, we only have a limited number of connections available, which a Core-owned implementation could help manage:
--- Excerpt from MDN
I'm not 100% sure if we've stayed consistent in this, but we have traditionally tried to keep the API surface of Core as small as possible by avoiding leaking large API surfaces like an Observable (not holding my breath that it would ever get added to ECMAScript). Another way to achieve this could be to expose a stream that can then be very easily turned into an observable with …
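A stream-first API along those lines could look roughly like this minimal sketch (the `fromStream` helper and its interfaces are hypothetical, not an existing Core API): Core would only expose a `ReadableStream`, and consumers could wrap it in an observable-like interface themselves.

```typescript
// Hypothetical helper: wrap a ReadableStream in a minimal observable-like
// interface, so Core only needs to expose the stream itself.
type Subscription = { unsubscribe(): void };
type Observer<T> = {
  next(value: T): void;
  complete?(): void;
  error?(err: unknown): void;
};

function fromStream<T>(stream: ReadableStream<T>) {
  return {
    subscribe(observer: Observer<T>): Subscription {
      const reader = stream.getReader();
      let cancelled = false;
      (async () => {
        try {
          // Pull values until the stream closes or the consumer unsubscribes.
          while (!cancelled) {
            const { done, value } = await reader.read();
            if (done) {
              observer.complete?.();
              break;
            }
            observer.next(value as T);
          }
        } catch (err) {
          observer.error?.(err);
        }
      })();
      return {
        unsubscribe() {
          cancelled = true;
          void reader.cancel();
        },
      };
    },
  };
}
```

Converting this into a real RxJS `Observable` would then be a one-liner for consumers that want it, without Core depending on RxJS.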
Looking at the code examples, it feels like a helper package to convert the response to SSE would suffice. WDYT?
If we just want to factorize the tooling, then a helper package could be sufficient. If we want to go further and create some kind of "SSE event bus" so that with HTTP/1 only one connection is kept open when multiple "endpoints" want to stream SSE (basically what bfetch was doing, and what @jloleysens is suggesting, I think, in his prior comment), then it's more work, and we'll for sure need better integration with our APIs. I'd also like to mention that "helper packages" have very bad discoverability. We already have some …
I'm not too concerned about having more adoption (even though it could be a problem) because I doubt users will use these features simultaneously. It seems more realistic that a single feature could be opened in many tabs. But I take your point; perhaps this is premature, I just don't want to be caught off guard by it. The "right" solution will require a service worker... but I feel like even having a simple limit mechanism in place could help. For example: when we open an SSE connection we set a counter in …
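A counter-based limit like the one described could be sketched as follows. All names, the storage key, and the limit of 4 are illustrative assumptions; a real implementation would back this with `window.localStorage` (shared across tabs) and would have to deal with cross-tab races and stale counters from crashed tabs.

```typescript
// Hypothetical guard: track the number of open SSE connections across tabs
// via a shared counter, refusing new connections past a fixed budget.
interface CounterStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SSE_COUNT_KEY = 'kibana.sse.openConnections'; // assumed key name
const MAX_SSE_CONNECTIONS = 4; // assumed budget below the HTTP/1 socket limit

function acquireSseSlot(store: CounterStore): boolean {
  const count = Number(store.getItem(SSE_COUNT_KEY) ?? '0');
  if (count >= MAX_SSE_CONNECTIONS) {
    return false; // fail gracefully instead of exhausting sockets
  }
  store.setItem(SSE_COUNT_KEY, String(count + 1));
  return true;
}

function releaseSseSlot(store: CounterStore): void {
  const count = Number(store.getItem(SSE_COUNT_KEY) ?? '0');
  store.setItem(SSE_COUNT_KEY, String(Math.max(0, count - 1)));
}
```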
Now that HTTP/2 will be on by default and HTTP/1 is deprecated for 9.x (#204384), I think the risk of exceeding the available sockets goes down a lot. Ideally we should still fail gracefully before allowing all available sockets to be consumed by SSE. But unless we know of a short-term need to keep many SSE sockets open in the background, building an SSE "bus" seems unnecessary.
Possibly related: #206352
Right now, while SSE between the Kibana server and the browser is technically achievable, implementing it is very tedious and "manual".
On the server side, it is basically only made possible by the fact that we can send a stream as the response body. The conversion from any kind of "observable" or "event emitter" to an SSE-compatible ("event source") stream must be implemented by the endpoint owner. E.g.
kibana/x-pack/plugins/inference/server/routes/chat_complete.ts
Lines 141 to 146 in 253cf6e
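For illustration, the server-side encode half that Core could factorize might look like this sketch. The function names are hypothetical, and it assumes JSON payloads framed with the `text/event-stream` `data:` syntax (each event serialized as `data: <json>` followed by a blank line).

```typescript
// Sketch of the encode half a platform helper could own: serialize events
// into the text/event-stream wire format, one "data:" field per event.
function encodeSseEvent(event: unknown): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Turn any async sequence of events into SSE-formatted text chunks that
// can be piped into the response body stream.
async function* toSseChunks<T>(events: AsyncIterable<T>): AsyncGenerator<string> {
  for await (const event of events) {
    yield encodeSseEvent(event);
  }
}
```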
The same goes for the browser side: the SSE stream can be consumed thanks to the `asResponse` and `rawResponse` parameters of our fetch service, but there's no tool out of the box to convert the stream back into some kind of typed observable. E.g.
kibana/x-pack/plugins/inference/public/chat_complete.ts
Lines 43 to 49 in 1bd42ad
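Symmetrically, the browser-side decode half might be sketched like this (hypothetical names). It only handles single-line `data:` fields and ignores `event:`, `id:`, `retry:`, and multi-line data, which a real implementation would need to support per the SSE specification.

```typescript
// Sketch of the decode half: split SSE text on the blank-line event
// delimiter and JSON.parse each "data:" payload back into typed events.
function* decodeSseChunk<T>(buffer: string): Generator<T> {
  for (const block of buffer.split('\n\n')) {
    for (const line of block.split('\n')) {
      if (line.startsWith('data: ')) {
        yield JSON.parse(line.slice('data: '.length)) as T;
      }
    }
  }
}
```

A platform-owned pair of encode/decode helpers like these two sketches would let endpoint owners work with typed events on both sides without reimplementing the wire format.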
Ideally, Core HTTP APIs should have better / native support for SSE endpoints, which would improve the DX around SSE, in addition to allowing the encode/decode logic to be factorized in a single place owned by the platform.