Add support for async path operations in the server #4617
Comments
Why don't we simply (at least) default to the async methods if they're available?
I'm definitely open to that; it's the least work for people hosting the servers. In Slack, however, @franciscojavierarceo expressed a preference for an explicit opt-in, and that position is reasonable. Personally, I'm fine either way and happy to implement either one. IIRC all data stores have an async method, just left unimplemented, so we'd either want to remove it from the interface and check whether the attribute exists, or provide some kind of flag in a given implementation saying async is available.
Made a branch showing what the flag option would look like here.
@tokoko @franciscojavierarceo here is what it would look like if async support were set at the data store level; the key lines are here. I'm partial to this because it isn't doing anything magical, maintains default behavior where async support doesn't exist, and doesn't require work from the person deploying the service. If you can get behind this approach I can finish off the PR (get it passing and all that) and have it up in the next day or two.
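The "flag at the data store level" idea can be sketched roughly as below. This is a minimal illustration, not Feast's actual API: the class names, the `async_supported` attribute, and the method names are assumptions made for the sketch.

```python
import asyncio

# Hypothetical sketch: each online store advertises whether it implements
# async reads, and the server dispatches on that flag. Names here are
# illustrative and do not reflect Feast internals.

class OnlineStore:
    """Base class: async support is opt-in per implementation."""
    async_supported = False

    def online_read(self, keys):
        return [f"sync:{k}" for k in keys]

class DynamoOnlineStore(OnlineStore):
    """A store that has implemented async reads sets the flag."""
    async_supported = True

    async def online_read_async(self, keys):
        await asyncio.sleep(0)  # stand-in for a real async client call
        return [f"async:{k}" for k in keys]

async def get_online_features(store, keys):
    # Dispatch on the flag: no work for the person deploying the server,
    # and default (sync) behavior is preserved for stores without async.
    if store.async_supported:
        return await store.online_read_async(keys)
    return store.online_read(keys)

print(asyncio.run(get_online_features(DynamoOnlineStore(), ["a"])))  # ['async:a']
print(asyncio.run(get_online_features(OnlineStore(), ["a"])))        # ['sync:a']
```

The appeal of this shape is that the deployer never sets anything: each store implementation declares its own capability and the server picks the right path.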
One last option here. It is definitely the cleanest and uses the FastAPI-recommended approach, but it makes the endpoints async all the time, i.e. not an opt-in. This would be my preferred approach given that it follows FastAPI patterns, doesn't require knowing "too much" about Feast internals to successfully deploy a service, and is still explicit at the data store level as to which methods are supported.
+1 on the async_supported method. I also think making the endpoints themselves async even if the calls are synchronous is fine. I understand we're potentially degrading performance, but what's the chance of someone caring about all that and running sync online store calls at the same time? IIUC, the alternative is to have two separate "servers", right? Or is there a way to dynamically inject the async qualifier into the endpoint functions?
@tokoko, not exactly, but that's more or less what I did in option 1 and option 2: you write two path operations and the one that gets executed is controlled by the async flag/indicator. It's a little clumsy looking and makes the code bloated and less readable. I'm with you on just doing `run_in_threadpool`.
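The always-async-endpoint pattern under discussion can be sketched as follows. FastAPI/Starlette expose this as `run_in_threadpool`; the sketch below uses the stdlib `asyncio.to_thread` as a stand-in for the same idea, and the function names are illustrative rather than Feast's.

```python
import asyncio
import time

def sync_get_online_features(entity_id):
    """Stand-in for a blocking online-store read."""
    time.sleep(0.01)
    return {"entity": entity_id, "features": [1, 2, 3]}

async def get_online_features_endpoint(entity_id):
    # The path operation is always `async def`; a sync retrieval call is
    # pushed to a worker thread so it never blocks the event loop. This is
    # the pattern run_in_threadpool implements for FastAPI endpoints.
    return await asyncio.to_thread(sync_get_online_features, entity_id)

result = asyncio.run(get_online_features_endpoint("driver_1001"))
print(result)  # {'entity': 'driver_1001', 'features': [1, 2, 3]}
```

This avoids writing two path operations per route: one `async def` endpoint serves both cases, delegating sync work to the threadpool.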
Is your feature request related to a problem? Please describe.
Add opt-in support for async path operations on the Python feature server. This would allow the server to take advantage of the async libraries that are gradually becoming available for the various data stores. Lookups to multiple feature views with online stores such as DynamoDB and Redis can be done in parallel, which would provide significant performance gains when calling feature services.
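The performance claim above can be illustrated with a small sketch: with async clients, reads for multiple feature views are issued concurrently, so total latency approaches the slowest single read rather than the sum. Feature-view names and latencies here are made up for the example.

```python
import asyncio

async def read_feature_view(name, latency):
    # Stand-in for an async DynamoDB/Redis read for one feature view.
    await asyncio.sleep(latency)
    return {name: "values"}

async def get_feature_service():
    # Three feature-view reads issued in parallel with asyncio.gather;
    # wall time is roughly max(latencies) instead of their sum.
    results = await asyncio.gather(
        read_feature_view("driver_stats", 0.02),
        read_feature_view("trip_stats", 0.02),
        read_feature_view("rider_stats", 0.02),
    )
    merged = {}
    for r in results:
        merged.update(r)
    return merged

print(asyncio.run(get_feature_service()))
```

With the sequential behavior shown in the flame graph below, the same three reads would take the sum of the three latencies.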
Describe the solution you'd like
If we are willing to move to async methods for path operations, I'd like to see `run_in_threadpool` used for the sync methods (example PR here). This is the recommended approach to running sync operations in async methods and is more or less what FastAPI does under the hood for sync endpoints anyway. If there is a preference for a manual opt-in, then:
A CLI parameter on `feast serve` to indicate a preference for deploying the async paths, which will leverage the async feature store SDK methods. Under the hood it would declare the path operations as async. As async `store.push` functionality gets added, an `--async-push` flag could be added as well. This approach is an explicit opt-in, which is desirable when overriding default behavior: it lets the person deploying the server make informed decisions about performance based on the backend being used, and the caller need not worry about the internals of the service, just the API contract.
Describe alternatives you've considered
A separate `feast-async` command was considered, but `feast` should be the one universal CLI; another option would be a subcommand like `feast serve async`.
Additional context
Here is a flame graph for a call to a feature service (flame graph image omitted). We see default batching behavior and 3 calls per batch, all of which are done sequentially. These can be done in parallel.