EMR launcher #1061
```diff
@@ -364,11 +364,13 @@ def sync_offline_to_online(feature_table: str, start_time: str, end_time: str):
     """
     Sync offline store to online.
     """
-    import feast.pyspark.aws.jobs
+    from datetime import datetime

     client = Client()
     table = client.get_feature_table(feature_table)
-    feast.pyspark.aws.jobs.sync_offline_to_online(client, table, start_time, end_time)
+    client.start_offline_to_online_ingestion(
+        table, datetime.fromisoformat(start_time), datetime.fromisoformat(end_time)
+    )


 @cli.command()
```
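For readers following along, the change above swaps the direct call into `feast.pyspark.aws.jobs` for the new Client method, which now takes `datetime` objects. A minimal sketch of calling it directly from Python, assuming a configured Feast deployment; the feature table name and time window are made-up illustration values:

```python
from datetime import datetime

from feast import Client

client = Client()  # picks up Feast configuration from the environment

# Hypothetical feature table name, used only for illustration.
table = client.get_feature_table("driver_statistics")

# The client method expects datetimes, so ISO-8601 strings are parsed first,
# mirroring what the CLI command above now does. Per the diff, the return
# value is a SparkJob handle.
job = client.start_offline_to_online_ingestion(
    table,
    datetime.fromisoformat("2020-10-01T00:00:00"),
    datetime.fromisoformat("2020-10-02T00:00:00"),
)
```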
```diff
@@ -424,5 +426,40 @@ def list_emr_jobs():
     )


+@cli.command()
+@click.option(
+    "--features",
+    "-f",
+    help="Features in feature_table:feature format, comma separated",
+    required=True,
+)
+@click.option(
+    "--entity-df-path",
+    "-e",
+    help="Path to entity df in CSV format. It is assumed to have event_timestamp column and a header.",
+    required=True,
+)
+@click.option("--destination", "-d", help="Destination", default="")
+def get_historical_features(features: str, entity_df_path: str, destination: str):
+    """
+    Get historical features
+    """
+    import pandas
+
+    client = Client()
+
+    # TODO: clean this up
+    entity_df = pandas.read_csv(entity_df_path, sep=None, engine="python",)
+
+    entity_df["event_timestamp"] = pandas.to_datetime(entity_df["event_timestamp"])
+
+    uploaded_df = client.stage_dataframe(
+        entity_df, "event_timestamp", "created_timestamp"
+    )
+
+    job = client.get_historical_features(features.split(","), uploaded_df,)
+    print(job.get_output_file_uri())
+
+
 if __name__ == "__main__":
     cli()
```

Review comment on the `"created_timestamp"` argument: Just a heads up: `"created_timestamp"` is actually supposed to be optional for the entity, so it's a bug that we need to resolve in another PR.
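To make the new command's flow concrete, and to flag the `created_timestamp` caveat from the comment above, here is a rough sketch of doing the same thing from Python. The entity values and the feature reference are invented for illustration, and adding a `created_timestamp` column is only a guess at a stopgap until the column becomes optional:

```python
import pandas as pd

from feast import Client

client = Client()

# Hypothetical entity dataframe; the CLI above assumes an event_timestamp column.
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": pd.to_datetime(
            ["2020-10-01 10:00:00", "2020-10-01 11:00:00"]
        ),
    }
)

# stage_dataframe is currently called with a created_timestamp column name even
# though it should be optional (see the comment above), so add one defensively.
entity_df["created_timestamp"] = entity_df["event_timestamp"]

uploaded_df = client.stage_dataframe(entity_df, "event_timestamp", "created_timestamp")

# Placeholder feature reference in feature_table:feature format.
job = client.get_historical_features(["driver_statistics:avg_daily_trips"], uploaded_df)
print(job.get_output_file_uri())
```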
```diff
@@ -76,6 +76,7 @@
 from feast.online_response import OnlineResponse, _infer_online_entity_rows
 from feast.pyspark.abc import RetrievalJob, SparkJob
 from feast.pyspark.launcher import (
+    stage_dataframe,
     start_historical_feature_retrieval_job,
     start_historical_feature_retrieval_spark_session,
     start_offline_to_online_ingestion,
```
```diff
@@ -885,9 +886,16 @@ def _get_feature_tables_from_feature_refs(
         return feature_tables

     def start_offline_to_online_ingestion(
-        self,
-        feature_table: Union[FeatureTable, str],
-        start: Union[datetime, str],
-        end: Union[datetime, str],
+        self, feature_table: Union[FeatureTable, str], start: datetime, end: datetime,
     ) -> SparkJob:
         return start_offline_to_online_ingestion(feature_table, start, end, self)  # type: ignore
+
+    def stage_dataframe(
+        self,
+        df: pd.DataFrame,
+        event_timestamp_column: str,
+        created_timestamp_column: str,
+    ) -> FileSource:
+        return stage_dataframe(
+            df, event_timestamp_column, created_timestamp_column, self
+        )
```

Review comments on the new `stage_dataframe` method:

- We actually already have a method in the Feast Client that takes a dataframe and puts it in offline storage. Currently it's called `ingest`.
- In my mind this is somewhat different from ingest. It is not intended for permanent storage; this is a convenience method: "take this dataframe, put it in some temp location where Spark can access it". I agree the launcher might not be the best place for it, just gotta be some code that can read `staging_location` from the config to construct the temp path.
- If it's some upload-to-temp-location function, it probably shouldn't be part of the Feast Client API. Maybe contrib?
- Just user convenience: if you're getting started with Feast and want to run historical retrieval, we can upload your pandas entity dataframe to S3 so you don't have to think about how to upload it and what bucket to use. We'll just put it in the staging location for you. Basically trying to remove an extra friction point in onboarding and tutorials. Right now it is only used for the CLI historical-retrieval command. I agree it may not be the best place for it, but at the same time it needs access to the config to figure out where to upload the dataframe, so I can't make it completely detached from the client (which has the config object).
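One way to read the decoupling concern in the thread above: the only configuration `stage_dataframe` really needs from the Client is the staging location. A hypothetical free-function sketch along those lines (the function name, parquet format, and S3 handling are assumptions, not what this PR implements; writing to S3 relies on a parquet engine plus s3fs being installed):

```python
import os
import uuid

import pandas as pd


def stage_entity_dataframe(df: pd.DataFrame, staging_location: str) -> str:
    """Write df to a unique temp path under staging_location and return its URI."""
    filename = f"entity_df_{uuid.uuid4().hex}.parquet"
    if staging_location.startswith("s3://"):
        # Remote staging bucket; pandas writes straight to S3 when s3fs is available.
        uri = f"{staging_location.rstrip('/')}/{filename}"
        df.to_parquet(uri)
        return uri
    # Local staging directory, e.g. for a standalone Spark launcher.
    os.makedirs(staging_location, exist_ok=True)
    path = os.path.join(staging_location, filename)
    df.to_parquet(path)
    return f"file://{path}"
```

The Client method in the diff keeps the same idea but reads the staging location from its own config, which is exactly the coupling the thread is debating.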
```diff
@@ -773,7 +773,7 @@ def _feature_table_from_dict(dct: Dict[str, Any]) -> FeatureTable:
     spark = SparkSession.builder.getOrCreate()
     args = _get_args()
     feature_tables_conf = json.loads(args.feature_tables)
-    feature_tables_sources_conf = json.loads(args.feature_tables_source)
+    feature_tables_sources_conf = json.loads(args.feature_tables_sources)
     entity_source_conf = json.loads(args.entity_source)
     destination_conf = json.loads(args.destination)
     start_job(
```

Review comment: Thanks for catching the typo.
```diff
@@ -0,0 +1,3 @@
+from .emr import EmrClusterLauncher, EmrIngestionJob, EmrRetrievalJob
+
+__all__ = ["EmrRetrievalJob", "EmrIngestionJob", "EmrClusterLauncher"]
```
Review comment: I am wondering if it might be better if users are expected to provide a URI that is recognizable by the Spark launcher, such as s3:// for EMR, gs:// for Dataproc, and file:// for standalone cluster launchers running locally. That way, we skip the process of reading into a pandas dataframe and converting the file again. Staging a pandas dataframe is still a useful method to have though, because we plan to add support for a pandas dataframe as an input argument to the historical feature retrieval method on the Feast Client.
Reply: I agree, it is mostly for convenience/testing for now, to reduce the number of steps someone needs to do to see if historical retrieval works. I wouldn't expect people to normally use a local CSV for entity dfs. I'd tweak this interface in later PRs though.
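As a sketch of the URI suggestion above, the CLI could skip staging entirely when the entity path already uses a scheme the launcher understands; the helper name and scheme list here are assumptions for illustration, not part of this PR:

```python
from urllib.parse import urlparse

# Schemes the launchers can read directly per the suggestion above:
# s3:// for EMR, gs:// for Dataproc, file:// for a local standalone cluster.
DIRECTLY_READABLE_SCHEMES = {"s3", "gs", "file"}


def needs_staging(entity_df_path: str) -> bool:
    """Return True for bare local paths that would still need to be uploaded."""
    return urlparse(entity_df_path).scheme not in DIRECTLY_READABLE_SCHEMES


assert needs_staging("/tmp/entities.csv")
assert not needs_staging("s3://my-bucket/entities.parquet")
```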