
[SIP-39] Global Async Query Support #9190

Closed
robdiciuccio opened this issue Feb 21, 2020 · 28 comments
Labels
change:backend Requires changing the backend sip Superset Improvement Proposal

Comments

@robdiciuccio
Member

robdiciuccio commented Feb 21, 2020

[SIP-39] Global Async Query Support

Motivation

  • Loading time on dashboards with many charts is excessive due to the number of concurrent connections to the server.
  • Synchronous connections left open waiting for result data to be returned block other requests to the server.
  • Browsers limit the number of concurrent connections to the same host to six, causing bottlenecks.
  • Some of this bottleneck is currently mitigated via the dashboard tabs feature, which defers loading of charts in background tabs. This lazy-loading could also be applied to charts outside of the viewport.
  • There is a proposal for SQL Lab to move away from tabs in the UI, altering the current async query polling mechanisms.
  • Provide a standardized query interface for dashboards, charts and SQL Lab queries.

Proposed Change

Provide a configuration setting to enable async data loading for charts (dashboards, Explore) and SQL Lab. Instead of multiple synchronous requests to the server to load dashboard charts, we issue async requests that enqueue a background job and return a job ID. A single persistent websocket connection to the server is opened to listen for results in real time.

Websockets via a sidecar application

Pros

  • Excellent websocket browser support
  • Realtime synchronous communication channel
  • Browser connection limits not an issue (255 in Chrome)
  • Supports bi-directional communication (possible future use cases)
  • Authentication/authorization via initial HTTP request that is then upgraded to a WSS connection.

Cons

  • Minor modifications to proxies, load balancers to support websocket connections
    • Sticky sessions not required at the load balancer if the reconnection story is sound
  • Requires persistent connection to the server for each tab
  • AWS "Classic" ELB does not support Websockets. Requires use of NLB (TCP) or ALB in AWS environments.

(Diagram: Superset async query architecture - Websockets v2)

Approach

Each open tab of Superset would create a unique "channel" ID to subscribe to. A websocket connection is established with the standalone websocket server as an HTTP request that is then upgraded to wss:// if authentication with the main Flask app is successful. Requests for charts or queries in SQL Lab are sent via HTTP to the Flask web app, including the tab's channel ID. The server enqueues a celery job and returns a job ID to the caller. When the job completes, a notification is sent over the WSS connection, which triggers another HTTP request to fetch the cached data and display the results.
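The submit side of this flow could be sketched as follows. This is illustrative only: the endpoint shape, the `submit_chart_data_request` name, and the commented-out Celery dispatch are hypothetical stand-ins, not the proposed API.

```python
import uuid

def submit_chart_data_request(channel_id: str) -> dict:
    """Enqueue a background job for the caller's channel and return a job ID."""
    job_id = str(uuid.uuid4())
    # enqueue_celery_job(job_id, channel_id)  # worker notifies the channel on completion
    return {"channel_id": channel_id, "job_id": job_id, "status": "pending"}
```

The client would then wait for a completion event on its channel over the websocket, and fetch the cached result via a normal HTTP request.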

Why a separate application?

The current Flask application is not well suited for persistent websocket connections. We have evaluated several Python and Flask-based solutions, including flask-SocketIO and others, and have found the architectural changes to the Flask web app to be overly invasive. For that reason, we propose that the websocket service be a standalone application, decoupled from the main Flask application, with minimal integration points. Superset's extensive use of Javascript and the mature Node.js websocket libraries make Node.js and TypeScript (SIP-36) an obvious choice for implementing the sidecar application.

Reconnection

The nature of persistent connections is that they will, at some point, disconnect. The system should be able to reconnect and "catch up" on any missed events. We evaluated several PubSub solutions (mainly in Redis) that could enable this durable reconnection story, and have determined that Redis Streams (Redis ≥ 5.0) fits this use case well. By storing a last received event ID, the client can pass that ID when reconnecting to fetch all messages in the channel from that point forward. For security reasons, we should periodically force the client to reconnect to revalidate authentication status.

Why not send result data over the socket connection?

While it is possible to send result data over the websocket connection, keeping the scope of the standalone service to event notifications will reduce the security footprint of the sidecar application. Fetching (potentially sensitive) data will still require necessary authentication and authorization checks at load time by routing through the Flask web app. Sending large datasets over the websocket protocol introduces potential unknown performance and consistency issues of its own. Websockets are not streams, and "the client will only be notified about a new message once all of the frames have been received and the original message payload has been reconstructed."

Query Cancellation

Queries may be "cancelled" by calling the /superset/stop_query endpoint in SQL Lab, which simply sets query.status = QueryStatus.STOPPED for the running query. This cancellation logic is currently implemented only for queries running against hive and presto databases. Queries that have been enqueued could be cancelled prior to executing the query by adding a check in the celery worker logic. It is also possible to revoke a Celery task, which will skip execution of the task, but it won’t terminate an already executing task. Due to the limited query cancellation support in DB-API drivers, some of which is discussed here, comprehensive query cancellation functionality should be explored in a separate SIP. That said, query cancellation requests may still be issued to the existing (or similar) endpoint when users intentionally navigate away from a dashboard or Explore view with charts in a loading state.
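The pre-execution check described above could look roughly like this in the worker. The names and the dict standing in for the metadata database are illustrative; cancelling a query that is already executing is out of scope here, per the note on DB-API driver limitations.

```python
STOPPED = "stopped"

# Stand-in for the query status column in the metadata database.
query_status: dict[str, str] = {}

def run_query_task(query_id: str) -> str:
    """Celery task body: skip execution if the query was cancelled while enqueued."""
    if query_status.get(query_id) == STOPPED:
        return "skipped"
    # ... execute the query and write results to the results backend ...
    return "executed"
```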

Query deduplication

The query table in the Superset metadata database currently includes only queries run via SQL Lab. Adapting this for use with dashboards and charts may have a larger impact than we're willing to accept at this time. Instead, a separate key-value store (e.g. Redis) may be used for tracking and preventing duplicate queries. Using a fast KV store also allows us to check for duplicate queries more efficiently in the web request cycle.

Each query issued to the backend can be fingerprinted using a hashing algorithm (SHA-256 or similar) to generate a unique key based on the following:

  • Normalized SQL query text
  • Database ID
  • Database schema
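A minimal sketch of such a fingerprint, assuming whitespace collapsing and lowercasing as the normalization step (the real normalization rules are yet to be defined):

```python
import hashlib

def query_fingerprint(sql: str, database_id: int, schema: str) -> str:
    # Normalize the SQL so formatting differences hash identically.
    normalized_sql = " ".join(sql.split()).lower()
    payload = f"{normalized_sql}\n{database_id}\n{schema}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```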

Prior to executing a query, a hash (key) is generated and checked against a key-value store. If the key does not exist, it is stored with a configured TTL, containing an object value with the following properties:

  • Query state (running, success, error)
  • Query type (chart, sql_lab)
  • Result key (cache key)

Another key is created to track the Channels and Job IDs that should be notified when this query completes (e.g. <hash>_jobsList[ChannelId:JobId]). If a duplicate query is issued while currently running, the Job ID is pushed onto the list, and all relevant channels are notified via websocket when the query completes. If a query is issued and a cache key exists with state == "success", a notification is triggered immediately via websocket to the client. If queries are "force" refreshed, query deduplication is performed only for currently running queries.
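The deduplication flow above could be sketched as follows, with plain dicts standing in for the two Redis keys; the function name and return values are illustrative only.

```python
# <hash> -> {"state": ..., "type": ..., "result_key": ...}
queries: dict[str, dict] = {}
# <hash> -> ["<channel_id>:<job_id>", ...] (the jobs list key)
jobs: dict[str, list[str]] = {}

def submit_query(hash_key: str, channel_id: str, job_id: str, force: bool = False) -> str:
    meta = queries.get(hash_key)
    if meta is not None and meta["state"] == "running":
        # Duplicate of an in-flight query: just register for notification.
        jobs[hash_key].append(f"{channel_id}:{job_id}")
        return "joined"
    if meta is not None and meta["state"] == "success" and not force:
        # Cached result exists: notify this channel immediately.
        return "cached"
    # New (or force-refreshed) query: record it and enqueue the job.
    queries[hash_key] = {"state": "running", "type": "chart", "result_key": None}
    jobs[hash_key] = [f"{channel_id}:{job_id}"]
    return "enqueued"
```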

New or Changed Public Interfaces

  • New configuration options
  • Additional API endpoints (see Migration Plan, below)

New dependencies (optional)

  • Redis ≥ 5.0 (pubsub, Redis Streams) or alternative
  • Node.js runtime for websocket application

Migration Plan and Compatibility

Asynchronous query operations are currently an optional enhancement to Superset, and should remain that way. Configuring and running Celery workers should not be required for basic Superset operation. As such, this proposed websocket approach to async query operations should be an optional enhancement to Superset, available via a configuration flag.

For users who opt to run Superset in full async mode, the following requirements will apply under the current proposal:

  • Chart cache backend
  • SQL Lab results backend
  • Pubsub backend (current recommendation: Redis Streams)
  • Running the sidecar Node.js websocket application

Browsers that do not support websockets (very few) should fall back to synchronous operation or short polling.

Migration plan:

  • Introduce new configuration options to enable asynchronous operation in Superset. Pass these values to the frontend as feature flags (as appropriate)
  • Introduce new API endpoints in Superset's Flask web application for async query/data loading for charts and queries
  • Introduce new API endpoints in Superset's Flask web application required for interaction with the sidecar application
  • Build and deploy the Node.js websocket server application
  • Build and deploy async frontend code to consume websocket events, under feature flag

Rejected Alternatives

SSE (EventSource) over HTTP/2

Server-Sent Events (SSE) streams data over a multiplexed HTTP/2 connection. SSE is a native HTML5 feature that allows the server to keep the HTTP connection open and push data to the client. Thanks to the multiplexing feature of the HTTP/2 protocol, the number of concurrent requests per domain is not limited to 6-8 but is virtually unlimited.

Pros

  • Single, multiplexed connection to the server (via HTTP/2)
  • Events pushed to the client from the server in realtime
  • Supports streaming data and traditional HTTP authentication methods
  • Minor adjustments to proxy, load balancers
  • Can use async generators with Flask & gevent/eventlet (untested)

Cons

  • HTTP/2 not supported in some browsers (IE)
  • SSE requires polyfill for some browsers
  • Requires persistent connection to the server for each tab
  • HTTP/2 requires TLS (not really a con, per se)
  • If SSE is used without HTTP/2, persistent SSE connections may saturate browser connection limit with multiple tabs
  • AWS "Classic" ELB does not support HTTP/2. ALB supports HTTP/2, but apparently converts it to HTTP/1.1 upstream (AWS-specific)
  • Does not support upstream messaging (from the client to the server)

NOTE: HTTP/2 multiplexing could still potentially be valuable alongside the websocket features, and should be investigated further.

Long polling (aka Comet)

Pros

  • Excellent browser support (a standard HTTP request)

Cons

  • Requires long-lived connections
  • Browser limit on concurrent connections to the same domain (6-8) results in connection blocking (HTTP/1.1)

Short polling

Pros

  • Excellent browser support (a standard HTTP request)
  • It's the current solution for async queries in SQL Lab

Cons

  • Browser limit on concurrent connections to the same domain (6-8) results in connection blocking (HTTP/1.1)
  • Header overhead: every poll request and response contains a full set of HTTP headers
  • Auth overhead: each polling request must be authenticated and authorized on the server

Thanks to @etr2460 @suddjian @nytai @willbarrett @craig-rueda for feedback and review.


Update 2020-03-10

Per the below discussion, an abstracted interface will be used on the client in order to support transport mechanisms other than Websockets and the proposed sidecar application. The final form of this abstraction will take shape during implementation, but the goal will be to have UI elements interact with the generic interface, while configuration will determine which transport is used underneath.

@etr2460 etr2460 added sip Superset Improvement Proposal change:backend Requires changing the backend labels Feb 22, 2020
@DiggidyDave
Contributor

WebSockets are pretty heavy-handed here, in my opinion. Short polling works well and is very load balancer friendly. The "con" listed above due to connection limits needn't be an insurmountable problem since these are quick polling calls (or am I missing something?).

I'm pretty concerned about the practical implications of this deploying in corporate environments. Routing HTTP is a well-understood, easy-to-scale problem that is most-likely to play nicely with any environment folks are deploying in and won't require infra changes to LBs etc which are not always under the control of the Superset "owners". I'm just not seeing the ROI on this vs keeping it simple.

@etr2460
Member

etr2460 commented Feb 24, 2020

We've seen significant load on both webservers and the metadata database managing the SQL Lab short polling requests. We're quite concerned that if we add short polling for every chart in a dashboard it would have a bunch of performance implications.

One option might be to do a bulk short poll on dashboards, making a single short poll to track all the in flight chart requests.

Also, we've seen issues around the short polling in SQL Lab where if any short poll fails, then the entire query is marked as failed on the front end. I wonder if the reliability of short polling makes it less ideal here than using websockets that reconnect when dropped

@DiggidyDave
Contributor

The bulk polling could make sense to reduce load, but my bigger point here is that scaling a fleet of HTTP servers (or even routing a particular endpoint to a distinct dedicated fleet) is a pretty simple problem that can be accommodated in most environments, whereas websockets and sidecars have a lot of interesting problems in that area and may not even be feasible in some environments.

For the last point, that sounds like a front-end bug, not a problem inherent with small HTTP GET requests for short polling. Polling is much simpler in terms of the tech involved (and can be stateless), so I am skeptical that small front-end bugs will cease to be a problem if we adopt a more complicated technology to replace the foundation.

And to be clear, I'm not meaning to dismiss the idea of websockets, I just want to make sure that the simpler alternative is discussed and the trade-offs.

It also might be worth discussing the relative strength of the "cons" for SSE. "Some browsers" don't support it, which means the subset of the IE11 users (1.55% globally) that is not on Windows 10, as well as Opera.

@robdiciuccio
Member Author

Short polling is an option, but it adds considerable load to the metadata database at scale to authenticate and check permissions for each pending object in the polling request(s). If the user has a number of open tabs, we then have each client potentially hammering the metadata DB pretty hard. Contrast this with the websocket option (or SSE+HTTP/2), which requires a single authentication action upon connect, and a single authorization for each result set only when fetched.

Websockets are more work to set up at the infrastructure level, though there are similar concerns with enabling HTTP/2 in many load balancers, without which SSE is not a viable option. My initial inclination when drafting this SIP was to recommend SSE+HTTP/2 rather than websockets, but the Flask app is not well suited for persistent connections, making a sidecar app more feasible. Websockets are also arguably more ubiquitous for realtime communication at the client at this point. With regard to scaling, the fact that sticky sessions are not required due to the reconnection strategy allows for flexible horizontal scaling of the websocket sidecar app, and there are several patterns for load balancing websocket servers. Are there specific scaling or implementation concerns that we should address?

Async query support is currently optional in Superset, and should remain so, IMO. The async solution we agree upon should be a balance of performance and feasibility, but we should consider short polling for a fallback if websockets are not available for whatever reason.

@DiggidyDave
Contributor

For the metadata/auth load problem, could we just issue short-lived tokens for the session with cache to remove the need to hammer that database? (Decomposition into simple problems with simple solutions may be preferable)

Last pushback on the bigger idea 😁: Websockets have their uses--for example, I've found them very useful in the past for things like streaming 60hz point cloud updates and telemetry data--but ultimately they are an RTC protocol and that is really not the situation we have here, and to me this feels like overkill. I concede it will get the job done, but I fear it will be unnecessarily "expensive", it will introduce complexity, it will introduce a host of new bugs and other fallout, and I think the infra/ops requirements may lead to reduced adoption in environments where there is friction to messing with LBs etc.

I think I've said my 2 cents on that... looking forward, if the community agrees to go with WebSockets i would ask that the SIP be modified to account for these concerns and keep it enterprise friendly. A couple of requests for consideration:

  • consider separating the dependency on websockets' interface from the transport layer using something like socket.io, which has the nice property of being configurable to use the polling transport layer and avoiding the use of actual websockets
  • reconsider using flask-socketio to avoid the need for a sidecar: the SIP doesn't elaborate on why it is too invasive, but in my experience flask-socketio is very clean and as simple as adding a route in Flask to expose a websocket endpoint. If architected thoughtfully (with some very light configuration on top), having Flask endpoints exposing the websockets on top of socket.io would give users the flexibility to use a sidecar or not, and to use websockets or polling at their discretion.

@robdiciuccio
Member Author

Socket.io defaults to long-polling, upgrading to websockets if available. Using Socket.io without websockets would almost certainly saturate the browser's HTTP connection limit with just a few tabs open (without HTTP/2). This was the main reason behind recommending vanilla websockets vs Socket.io.

I'm interested in hearing more about your experience with flask-socketio. Are you running this with gunicorn and gevent? This setup is the current recommendation in the Superset docs, but there are apparent incompatibilities with database drivers and green threads. This lack of clarity around green thread support in Superset is another reason for recommending handling persistent connections in a separate application. I'd also be interested in hearing if others are using gevent or eventlet successfully in production.

@DiggidyDave
Contributor

DiggidyDave commented Feb 26, 2020

I'm sure you've seen all of this @robdiciuccio but leaving it here for the general audience, it touches on a lot of the tradeoffs discussed here: https://moduscreate.com/blog/fast-polling-vs-websockets-2/

I had forgotten that the polling transport used long polling. :-/ I think there are some options you can configure on the transport, but I'm less confident now that short-polling with socket.io is an option. When I used it previously, we were using websockets (not long polling) on gunicorn and gevent but we were not using any datasources so I can't speak to that specifically. But we did have to actually write code on the server side to manage some concurrency issues (like explicitly sleeping to yield the processor in certain places to keep things moving) so it might make sense that some db drivers are not out-of-the-box compatible if running in the same process as the socket server.

I know I said I had said my 2 cents, but a couple more questions came to mind (sorry!):

  • are "many tabs" really a problem? do we need to be polling/streaming events constantly for inactive tabs? If only the 1-3 (for power users with big desktops with side by side windows) active tabs a user will have are the ones we care about, could we only actively short-poll on those active tabs (using page visibility apis)? upon switching tabs, a quick single, first poll request will see if results have come in for the new tab and cause them to be immediately fetched
  • if the above is true: if polling is active only for active tabs, and batched for dashboards, what are the remaining problems that require websockets?

@willbarrett
Member

willbarrett commented Mar 3, 2020

Hi @DiggidyDave I want to chime in here. I don't think short-polling is a good option architecturally. Short-polling necessarily introduces latency as the system waits between polling intervals. The lower the latency desired in the system, the greater the load placed on the server to answer short-poll requests. Maintaining an open websocket connection solves this problem.

To respond to your question on tabs, yes, I think "many tabs" is a problem. Our user research indicates that a small number of power-users create the majority of content inside of organizations. These users tend to have job titles like "business analyst", and for them a large number of open tabs is the norm. For this reason I would say we want non-active tabs to be actively updated. Think of a situation where you fire a query against a data warehouse in one tab, then switch away to work on something else. Ideally, there would be some manner of notification on update to let the user know that the query has finished. This is possible with active websockets without increasing server overhead. With short-polling, we can DDOS ourselves from inactive tabs but if we disable short-polling on inactive tabs we're going to negatively impact the user experience.

@DiggidyDave
Contributor

Those are valid points, but quick alternative solutions:

  • The latency tolerances we are talking about here are "fluid". Polling can initially be sub-second and backoff as execution time increases... someone waiting for a 10 minute query isn't going to notice 1-2 seconds of poll interval.
  • inactive tabs can be rolled into batch polling with ephemeral (not structural) localStorage to let only the active tab poll for a batch of active queries... polling operations should be very lightweight on the server side and lean heavily on cache.

Like I said, I agree websockets will work. I'm just afraid it will cause a lot of complexity, instability, and will cause reduced adoption, in exchange for little or no benefit over simpler approaches.

@willbarrett
Member

Feedback on the alternate solution:

  • Sub-second polling falls down when latency between client and server increases. The polling implementation gets increasingly complex as we try to meet a similar level of service as what we would get from websockets.
  • Transitioning polling between tabs and tracking state for all of the tabs adds substantial complexity to the front-end which is unnecessary with a single websocket connection per tab.
  • By using short-polling, we may have slightly decreased the complexity of deployments, but conversely we have increased the complexity of the front-end. I would prefer to see more complexity server-side where issues are more visible vs. on the front-end, where issues can be harder to track.

@DiggidyDave
Contributor

To the first bullet: I don't disagree at all that websockets have the potential to outperform polling w.r.t. frontend-to-backend latency (assuming the backend--which is, of course, polling something, somewhere--is written efficiently). What I am questioning is whether reducing the extra 500-1000ms max of latency for long-lived queries (and 250-500ms for short-lived ones) on non-active tabs is actually a requirement (I obviously think it is not) and whether it is worth complicating and destabilizing Superset over it.

OK, I genuinely think I've said my piece now :-) Thanks for the good faith back-and-forth here. I think it's healthy.

@williaster
Contributor

williaster commented Mar 3, 2020

New dependencies (optional) Node.js runtime for websocket application

What are the expected node versions to be supported?

I also don't see any discussion of the migration plan/implications for the @superset-ui client, which is used throughout the app and only supports HTTP fetching at the moment. Can you provide details on the plan there? How does an http => wss upgrade work with fetch? Will the wss client be added to @superset-ui?

Additionally, we rely heavily on the @superset-ui client for embedding charts in other applications which currently relies on CORS. Can you discuss the implications/interactions between CORS and wss?

@robdiciuccio
Member Author

@DiggidyDave The use cases for long-running queries in SQL Lab and loading dashboards in a performant manner feel like very different use cases. Polling could (and currently does) satisfy the SQL Lab use case, though there is significant room for improvement there. The latency introduced by polling in dashboards would be a much larger impact to user experience.

Agreed that this discussion is healthy! Architecture proposals should be actively debated and scrutinized so we all benefit from better informed decision-making.

@williaster @superset-ui will need to be upgraded to also support WebSocket connections. This upgrade should be backwards compatible and configured via feature flags. CORS does not apply to WebSockets, but securing the WS connections by validating origin domain is recommended, so we should keep the cross-domain use case in mind during implementation of these security restrictions.

@robdiciuccio
Member Author

We're going to take another look at using Socket.io in order to support environments where websockets are not an option.

@DiggidyDave
Contributor

Much appreciated. FWIW here is a bit about bypassing the initial/default long-polling connection (which it usually establishes first as a fallback): https://socket.io/docs/client-api/#With-websocket-transport-only

@robdiciuccio
Member Author

@DiggidyDave Socket.io requires sticky sessions at the load balancer in order to properly perform long-polling: https://socket.io/docs/using-multiple-nodes/. Not sure if this is possible in your environment?

@DiggidyDave
Contributor

Interesting, I'm not actually sure about that, I'd have to dig into it. :-/ The main thing I think we care about is any client-side interface between the business logic and raw native websockets, to reserve that option to swap out the impl with a short-polling approach. (socket.io is just a super popular wrapper that happens to have the transport abstraction built-in, as is https://github.com/sockjs/sockjs-client and others)

There are other options that are not coupled with a server architecture, like this one that is a small client-side wrapper: https://github.com/lukeed/sockette

Or this one that is a promise-based wrapper (obvs, a polling impl could fulfill promises just as easily as far as client code is concerned): https://github.com/vitalets/websocket-as-promised

Or anything else. As long as websockets are behind an interface of some kind there will be a path to unblock environments that can't use ws.

Something as simple as this wrapping ws would be perfectly fine:

abstract class ServerClientEventBus {
    abstract postMessage(...args: unknown[]): void;
    abstract receiveMessage(...args: unknown[]): void;
}

class WSSEventBus extends ServerClientEventBus {
    // etc...
}

Anyway, I appreciate you looking at that.

@robdiciuccio
Member Author

@DiggidyDave the abstraction approach sounds good, as it appears that Socket.io is not really appropriate for this use case. I've updated the body of the SIP with notes on the client-side abstraction.

@rmgpinto

When I enable GLOBAL_ASYNC_QUERIES, all native filters on dashboards yield No Results

@villebro
Member

Thanks for reporting @rmgpinto - we'll put this on the list of issues to fix. Ping @junlincc

@robdiciuccio
Member Author

The vote for this SIP PASSED with 5 binding +1, 3 non-binding +1, and 0 -1 votes on 3/21/2020

@villebro
Member

villebro commented Jul 3, 2024

With this SIP still being behind an experimental feature flag, and not actively maintained, I've been thinking about ways we could simplify the architecture and finally make this generally available in a forthcoming Superset release. Specifically, I found that the websocket implementation didn't significantly improve the UX compared to the polling solution. In retrospect, I feel most of @DiggidyDave 's comments turned out to be true - the solution ended up becoming too complex, and didn't gain critical adoption within the community. However, the feature is still as relevant today as it was when this SIP was opened, and I think stabilizing it is very important because Superset's current synchronous query execution model causes lots of issues:

  • If many people open the same chart/dashboard at the same time, each will execute a query against the underlying database, since there is no locking of queries.
  • If a user refreshes a dashboard multiple times, they can quickly congest the downstream database with heavy queries, both eating up webserver threads and database resources.
  • In some cases, the web worker threads/processes get blocked waiting for long running queries to complete executing, making it impossible to effectively scale web worker replica sets based on CPU consumption. By moving queries to async workers it should become possible to get by with much slimmer webworker replica sets. Furthermore, async workers could be scaled up/down based on the queue depth.

To simplify the architecture and reuse existing functionality, I propose the following:

  • The websocket architecture is removed, as it adds a lot of complexity to the architecture - in the future only polling would be supported.
  • The concept of a "query context cache key" is removed in favor of only a single cache key, i.e. the one we already use for chart data.
  • When requesting chart data, if the data exists in the cache, the data is returned normally.
  • When chart data isn't available in the cache, only the cache_key is returned, along with additional details: when the most recent request has been submitted, status (pending, executing) etc.

The async execution flow is changed to be similar to SQL Lab async execution, with the following changes:

  • when the async worker starts executing the query, the cache key is locked using the KeyValueDistributedLock context manager. This means that only a single worker executes any one cache key query at a time.
  • To support automatic cancellation of queries, we add a new optional field "poll_ttl" to the query context, which makes it possible to automatically cancel queries that are not being actively polled. Every time the cache key is polled, the latest poll time is updated on the metadata object. While executing, the worker periodically checks the metadata object; if poll_ttl is defined and the time since the last poll exceeds the TTL, the query is cancelled. This ensures that if a person closes a dashboard with lots of long running queries, the queries are automatically cancelled if nobody is actively waiting for the results. By default, frontend requests have poll_ttl set to whichever value is set in the config (DEFAULT_CHART_DATA_POLL_TTL). Cache warmup requests would likely not have a poll_ttl set, so as to avoid unnecessary polling.
  • To limit hammering the polling endpoint, we introduce a customizable backoff function in superset_config.py, which makes it possible to define how polling backoff should be implemented. The default behavior would be some sort of exponential backoff, where freshly started queries are polled more actively, and queries that have been pending/running for a long time are polled less frequently. When the frontend requests chart data, the backend provides the recommended wait time in the response based on the backoff function.

Some random thoughts:

  • Currently multi-query query contexts get executed serially. With this new approach the queries can be executed in parallel, as each query is dispatched separately.
  • I feel synchronous execution is very problematic in the context of Superset due to the problems described in the intro of this post. We may want to make async execution the only available execution model at some point. To not make Redis a mandatory component, we may need to expand the current scope of the Metastore Cache so that it can be used for scheduling Celery tasks. I assume this will require adding Kombu messaging support to it.
  • It could be a good idea to have a table in the metastore for currently pending/executing queries that Admins could access via a dedicated menu. This way there would be a way to kill queries directly via the Superset UI. Note that query cancellation probably needs to be implemented per db engine spec.
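For the admin-facing table of pending/executing queries, a hypothetical SQLAlchemy model in the metastore might look like this. The model name and columns are assumptions sketched for discussion, not an actual Superset model:

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class PendingChartQuery(Base):
    """Hypothetical metastore table backing an admin 'running queries' view."""
    __tablename__ = "pending_chart_query"

    id = Column(Integer, primary_key=True)
    cache_key = Column(String(256), nullable=False, unique=True)
    status = Column(String(16), nullable=False)  # pending | executing
    submitted_at = Column(DateTime, nullable=False)
    last_poll_time = Column(DateTime, nullable=True)


# Quick demo against an in-memory SQLite database:
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(PendingChartQuery(
        cache_key="abc123", status="executing", submitted_at=datetime.utcnow(),
    ))
    session.commit()
    row = session.query(PendingChartQuery).filter_by(status="executing").one()
    found_key = row.cache_key
```

An admin "kill query" action would then delete/flag the row and dispatch the per-db-engine-spec cancellation mentioned above.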

I assume we need a new SIP for this, but I wanted to drop this comment here to get initial feedback.

@michael-s-molina (Member)

Thank you for the comment @villebro. I really like the idea of removing extra layers and reusing existing features such as KeyValueDistributedLock and our Metastore Cache.

We may want to make async execution the only available execution model at some point.

I think this point ☝🏼 would be essential/required as the result of this work. We need to reduce complexity and only keep solutions that are maintained.

I assume we need a new SIP for this, but I wanted to drop this comment here to get initial feedback.

Yep. We definitely need a SIP to discuss the details.

@rusackas (Member) commented Jul 3, 2024

Agreed with Michael on all points. And I'm very thankful for all of your insights and input here. I'd be SO excited to see this feature mainstreamed, and I think both the UX and the infra scenario will improve.

Is there any down side to just making Redis a required component, if that simplifies things further?

Also, just to play devil's advocate on the removal of websockets, there are a few distinct advantages I kind of dream of:

  1. We wouldn't have to have any sort of polling backoff OR hammer a polling API. Results would always come in sooner rather than later, improving the UX.
  2. Having a websocket that's made aware of chart data endpoints opens numerous other doors... whether it's the redis result cache being invalidated/updated ("there's fresh data for your query, come get it!"), or even the possibility of getting much more frequent data, i.e. realtime events streaming from Kafka/Influx/etc.
  3. Having a websocket connection also provides a door to solve a variety of other UX problems we have, e.g. token expiration awareness, or potentially notifications about anything (reports being sent, chat messages, schema updates, dashboards being updated while you're looking at them, etc).

At some point, I think we'll want any or all of the above, so we'll want to revisit the idea of having a websocket solution in place. If this is not the time, so be it... mainstreaming GAQ would be a clear priority. But if there's anything worth keeping/shelving here for a future effort, it seems potentially worth it.

@betodealmeida (Member)

@rusackas I think we could use Server-sent events for those use cases, since all those use cases are unidirectional. It's a simpler architecture than Websockets.

One thing I'm worried about with an async-only mode (which I think is a good idea overall) is the additional latency when using sub-second databases. We should make sure Superset is not adding a lot of latency when running the queries asynchronously, e.g. if we're going to poll, we should poll aggressively at first and then back off.
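As a side note on the server-sent events suggestion: SSE is just a `text/event-stream` response made of `event:`/`data:` lines terminated by a blank line, which is what makes it simpler than websockets for unidirectional pushes. A minimal sketch of formatting such an event (the function name is hypothetical):

```python
import json
from typing import Optional


def format_sse(data: dict, event: Optional[str] = None) -> str:
    """Serialize one event in the text/event-stream wire format:
    an optional 'event:' line, a 'data:' line, and a blank-line terminator."""
    msg = f"data: {json.dumps(data)}\n\n"
    if event is not None:
        msg = f"event: {event}\n{msg}"
    return msg
```

A "query finished" push would then be one such event streamed on a long-lived response, which a browser `EventSource` consumes natively.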

@michael-s-molina (Member)

@villebro Maybe it would be good to open a SIP as [WIP], and paste these last 4 comments there, so we don't lose this valuable feedback when discussing the SIP.

@villebro (Member) commented Jul 3, 2024

@villebro Maybe it would be good to open a SIP as [WIP], and paste these last 4 comments there, so we don't lose this valuable feedback when discussing the SIP.

If there's no major opposition to moving ahead with GAQ2 as a SIP then I can open it up today.

@villebro (Member) commented Jul 3, 2024

One thing I'm worried about with an async-only mode (which I think is a good idea overall) is the additional latency when using sub-second databases. We should make sure Superset is not adding a lot of latency when running the queries asynchronously, e.g. if we're going to poll, we should poll aggressively at first and then back off.

@betodealmeida this is actually a really good point - Celery will definitely add unpleasant overhead for sub-second databases. So maybe we shouldn't totally remove sync mode after all. But for Trino-type OLAPs I think it'll definitely be great.

Status: Implemented / Done