Remote offline feature server deployment #4032
Comments
Hi @tokoko,

```yaml
project: mystore
provider: local
offline_store:
  type: arrow # or remote, or similar
  host: mystore.feast.svc.cluster.local
  port: 8815
```

1- Can you confirm it matches, at least in general terms, your initial idea?
2- If the Arrow Server
3- Finally, why shouldn't we instead extend the feature server to add an endpoint for the offline features? Is it just a matter of the scale of the data being processed, or are there other factors at play? The reason for asking is that we're going to introduce another service in the architecture, possibly with multiple instances in the same cluster in case of different repos, and this complexity could confuse users regarding the chosen direction.

(*) To generalize this concept even further, and open the door to applying a consistent security model to the SDK clients, we could also think of implementing RemoteOnlineStore and RemoteRegistryStore the same way (based on either the Arrow Server or the existing Feature Server), to simplify the interaction with the remote server(s) and have a unified client API hiding the complexity of the REST/gRPC calls.
---
Hey, thanks for getting involved here :)
---
Just one thought: for ease of use, the SDK and remote paths should keep the same interface.
---
@redhatHameed I'm not sure a flag is necessary. If we replicate the way the remote registry works, on the server side
On the client side, the user will have to configure their client to query the server; it should have no knowledge whatsoever about what type of offline store the server is wrapping. So
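To illustrate that split, here is a hypothetical sketch of the two `feature_store.yaml` files, mirroring how the remote registry is configured (the `duckdb` choice is just an example; clients never see it):

```yaml
# Server-side feature_store.yaml (hypothetical sketch): the server wraps
# a concrete offline store of the operator's choosing.
offline_store:
  type: duckdb   # any concrete store; never visible to clients
```

```yaml
# Client-side feature_store.yaml: only the endpoint, no store details.
offline_store:
  type: remote
  host: mystore.feast.svc.cluster.local
  port: 8815
```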
---
@tokoko Thanks for clarifying, that makes sense.
---
@tokoko just to validate our understanding, is the expectation to implement all the methods of the OfflineStore interface (currently 5) in both the server and the client?
---
@dmartinol yes, but let me go a bit more into detail just to make sure we're on the same page. The first three of those five methods are lazy; in other words, they don't return datasets themselves, they return `RetrievalJob` objects, and the query only runs if the user instead opts to call a retrieval method such as `to_df()` or `to_arrow()`.
---
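The lazy contract can be sketched without any Feast internals. `RemoteRetrievalJob` and its `_execute` hook below are illustrative names, not the actual Feast API:

```python
# Illustrative sketch of the lazy contract discussed above: building the
# job is cheap, and the remote server is only contacted when the caller
# asks for results. All names here are hypothetical, not Feast API.
class RemoteRetrievalJob:
    def __init__(self, query: dict, execute):
        self._query = query      # serialized description of the request
        self._execute = execute  # callable that actually hits the server
        self.calls = 0           # number of remote executions (for demo)

    def to_arrow(self):
        # Only now does the client contact the remote offline server.
        self.calls += 1
        return self._execute(self._query)

    def to_df(self):
        # A real implementation would convert the Arrow result to pandas.
        return self.to_arrow()

# Constructing the job performs no remote work:
job = RemoteRetrievalJob({"method": "get_historical_features"},
                         execute=lambda q: ["row-1", "row-2"])
assert job.calls == 0          # lazy: nothing has run yet
rows = job.to_arrow()          # the query executes here
assert job.calls == 1
```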
Agreed, we must respect the lazy behavior. @redhatHameed we need to experiment on this one as a first step.
---
@tokoko one more clarification: what do you think about
---
@dmartinol I think starting with @redhatHameed that's a very good point. I haven't thought much about it, but I think it should depend on which materialization engine the user has configured.
---
Thanks @tokoko, in that case we can create a new issue to address the implementation of the
---
@tokoko, I see that all the implementations of the materialization engine run:

```python
offline_job = self.offline_store.pull_latest_from_table_or_query(...)
```

That said, if the client doing the materialization already has a remote
---
Yup, it would work through the Flight server, but only in the sense that the source data will be pulled from there. All the rest of the logic (conversion to Arrow and
---
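That division of labor can be sketched with plain stand-ins (every name here — `fetch_latest`, `write_batch`, the fake stores — is hypothetical, not Feast API): only the source-data pull crosses the wire to the Flight server, while conversion and the online-store write run on the client doing the materialization.

```python
# Hypothetical sketch of remote materialization: step 1 is the only
# remote call; steps 2+ stay local on the materializing client.

class FakeRemoteOfflineStore:
    def fetch_latest(self, feature_view, start, end):
        # Stand-in for pull_latest_from_table_or_query going through
        # the Arrow Flight server; returns rows instead of Arrow data.
        return [("user_1", 0.9), ("user_2", 0.4)]

class FakeOnlineStore:
    def __init__(self):
        self.rows = []

    def write_batch(self, feature_view, rows):
        self.rows.extend(rows)

def materialize(remote_offline, online_store, feature_view, start, end):
    # 1. Remote: pull the latest source rows from the offline server.
    table = remote_offline.fetch_latest(feature_view, start, end)
    # 2. Local: conversion and the online-store write happen here,
    #    none of it goes through the Flight server.
    online_store.write_batch(feature_view, list(table))
    return len(table)
```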
@tokoko, @dmartinol and I created a draft PR to get your initial input on whether we're heading in the right direction. Please take a look when you get time. Thanks. cc @jeremyary
---
@redhatHameed thanks, great work. Looks good overall; I'll leave comments in the PR.
---
@tokoko Thanks for the review and comments. Before moving further with this approach, do you have an idea for implementing a security model that aligns with this (Arrow Flight server) approach and fits with the other components?
---
I might be delving into "speculation" territory here, but I can try to describe a high-level overview of what I'm expecting from the security model.
The upside of the second approach is that it's a lot less invasive than the first one. You could potentially end up with a setup where permissions and objects are managed with some level of separation between them. I think I'm more in favor of this.
P.S. Mostly just thinking out loud here, I might be totally overengineering this 😄
---
@tokoko, do we have a separate issue dedicated to discussing these details? It appears that the topic is more generic than just the remote offline feature server, and your hints have been really valuable. Having a dedicated space for discussion until we reach a conclusion might be beneficial. |
No, we don't. I'll go ahead and split the last comment off as a separate issue.
---
@tokoko what's the benefit of a remote server? Aren't we then adding additional network overhead? The recommended model, in my opinion, should be:

(2) can be evaluated using the Python server code. This is how I did it previously and it worked well, but I'm curious to understand the benefits of the remote approach.
---
Yes, for most actions it would add additional network overhead; that's why we're strictly talking about it as an optional add-on. The biggest upside, the only one really, is that a remote deployment allows you to put it behind your own security layer (3A), which is impossible when a client has direct access to the data via the SDK.

I'm not sure what you mean here.
---
@franciscojavierarceo isn't this the Feast client? I mean, when you configure the client-side store as:

```yaml
offline_store:
  type: remote
  host: mystore.feast.svc.cluster.local
  port: 8815
```

and then run the queries like
---
Is your feature request related to a problem? Please describe.
Currently Feast can deploy both the online store (feature server) and the registry as standalone services that the Feast library can access remotely. I'd like to see the same capability for the offline store. The primary goal of this feature is to enable a fully remote deployment of all Feast components and, in the future, a common enterprise-level security model for all three.
Describe the solution you'd like
There are two possible general solutions I have in mind:
We can deploy an offline feature server as an Apache Arrow Flight server that wraps calls to offline store implementations and exposes the interface methods as Arrow Flight endpoints. This is also something that Hopsworks seems to have implemented before; they use a DuckDB engine behind a Flight server.
The offline feature server can be deployed as a lightweight REST or gRPC server that only handles requests and metadata, and uses cloud filesystems for data transfer to and from the user. For example, in the case of S3, the feature server would write out the results as a Parquet dataset to a folder in an S3 bucket, generate presigned URLs for it, and return the list of URLs to the caller. The idea is to emulate the mechanics of Delta Sharing while avoiding its limitations (in Delta Sharing, the dataset needs to be in Delta format and metadata responses must be immediate, which is obviously not an option for Feast).

In both scenarios, client-side code should be virtually indistinguishable from other offline store implementations. The particulars of the implementation will be hidden behind a RemoteOfflineStore class.
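The second option can be sketched end to end with stand-ins: `sign_url` below is a placeholder for a real presigned-URL call (e.g. S3's), a local temp directory plays the role of the bucket, and JSON stands in for Parquet. All names are hypothetical.

```python
# Hedged sketch of the metadata-only server flow: write the result out
# as a multi-file dataset, then hand the caller short-lived URLs.
import json
import os
import tempfile

def sign_url(path: str) -> str:
    # Placeholder: a real server would call the cloud SDK's
    # presigned-URL API instead of returning a file:// URL.
    return "file://" + path

def serve_offline_request(result_chunks) -> list:
    # 1. Write the query result as a multi-file dataset (a real server
    #    would write Parquet files under an s3://bucket/prefix/).
    out_dir = tempfile.mkdtemp(prefix="offline_result_")
    urls = []
    for i, chunk in enumerate(result_chunks):
        path = os.path.join(out_dir, f"part-{i}.json")
        with open(path, "w") as f:
            json.dump(chunk, f)
        # 2. Return one short-lived URL per file to the caller, who
        #    downloads the data directly from storage.
        urls.append(sign_url(path))
    return urls
```

The caller never talks to the data plane through the server itself, which is what keeps the server lightweight.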