feat: Bump psycopg2 to psycopg3 for all Postgres components #4303
Conversation
Some additional clarifications from my side!
```python
class ZeroRowsQueryResult(Exception):
    def __init__(self, query: str):
        super().__init__(f"This query returned zero rows:\n{query}")


class ZeroColumnQueryResult(Exception):
    def __init__(self, query: str):
        super().__init__(f"This query returned zero columns:\n{query}")
```
Exceptions to use for stricter handling of psycopg3's type hints.
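As an illustration (not the PR's exact call sites), these exceptions can guard the `Optional` cursor metadata that psycopg3's stricter type hints surface; `fetch_min_max` below is a hypothetical helper:

```python
class ZeroRowsQueryResult(Exception):
    def __init__(self, query: str):
        super().__init__(f"This query returned zero rows:\n{query}")


class ZeroColumnQueryResult(Exception):
    def __init__(self, query: str):
        super().__init__(f"This query returned zero columns:\n{query}")


def fetch_min_max(cur, query: str):
    # Hypothetical call site: in psycopg3, cur.description is typed as
    # Optional, so the linter forces an explicit None check before the
    # column metadata or the fetched row may be used.
    cur.execute(query)
    if cur.description is None:
        raise ZeroColumnQueryResult(query)
    row = cur.fetchone()
    if row is None:
        raise ZeroRowsQueryResult(query)
    return row
```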
```python
query = f"""
    SELECT
        MIN({entity_df_event_timestamp_col}) AS min,
        MAX({entity_df_event_timestamp_col}) AS max
    FROM ({entity_df}) AS tmp_alias
    """
```
No updates here, only re-formatting the query
```python
# excerpt from the online_write_batch signature
    data: List[
        Tuple[EntityKeyProto, Dict[str, ValueProto], datetime, Optional[datetime]]
    ],
    progress: Optional[Callable[[int], Any]],
    batch_size: int = 5000,
```
Make `batch_size` configurable, addressing #4036.
```python
with self._get_conn(config) as conn, conn.cursor() as cur:
    # Format insert values
    insert_values = []
    for entity_key, values, timestamp, created_ts in data:
        entity_key_bin = serialize_entity_key(
            entity_key,
            entity_key_serialization_version=config.entity_key_serialization_version,
        )
        timestamp = _to_naive_utc(timestamp)
        if created_ts is not None:
            created_ts = _to_naive_utc(created_ts)

        for feature_name, val in values.items():
            vector_val = None
            if config.online_store.pgvector_enabled:
                vector_val = get_list_val_str(val)
            insert_values.append(
                (
                    entity_key_bin,
                    feature_name,
                    val.SerializeToString(),
                    vector_val,
                    timestamp,
                    created_ts,
                )
            )

    # Create insert query
    sql_query = sql.SQL(
        """
        INSERT INTO {}
        (entity_key, feature_name, value, vector_value, event_ts, created_ts)
        VALUES (%s, %s, %s, %s, %s, %s)
        ON CONFLICT (entity_key, feature_name) DO
        UPDATE SET
            value = EXCLUDED.value,
            vector_value = EXCLUDED.vector_value,
            event_ts = EXCLUDED.event_ts,
            created_ts = EXCLUDED.created_ts;
        """
    ).format(sql.Identifier(_table_id(config.project, table)))

    # Push data in batches to online store
```
No changes here, only moving code further up in the function to make it more readable.
""" | ||
INSERT INTO {} | ||
(entity_key, feature_name, value, vector_value, event_ts, created_ts) | ||
VALUES (%s, %s, %s, %s, %s, %s) |
First of 2 actual changes to the function: we need to explicitly set the number of placeholder values.
```python
cur.executemany(sql_query, cur_batch)
```
Second of 2 actual changes to the function: the `psycopg2.extras.execute_values` functionality is removed in psycopg3. The maintainer of psycopg3 advises using `executemany` instead. See psycopg/psycopg#576 and psycopg/psycopg#114.
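Since `execute_values` no longer exists, batching around `executemany` has to be done by hand. A minimal sketch, assuming a list of pre-formatted rows (`_chunk` and `write_batches` are illustrative names, not the PR's exact code):

```python
from itertools import islice
from typing import Any, Callable, Iterable, List, Optional


def _chunk(rows: List[tuple], size: int) -> Iterable[List[tuple]]:
    """Yield successive batches of at most `size` rows."""
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch


def write_batches(
    cur,
    sql_query: str,
    insert_values: List[tuple],
    batch_size: int,
    progress: Optional[Callable[[int], Any]] = None,
) -> None:
    # psycopg3 removed psycopg2.extras.execute_values; the recommended
    # replacement is executemany, called once per manually built batch.
    for cur_batch in _chunk(insert_values, batch_size):
        cur.executemany(sql_query, cur_batch)
        if progress:
            progress(len(cur_batch))
```

Batching by hand like this is also what lets the `progress` callback fire once per batch, which is what makes a configurable `batch_size` useful.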
```python
values_dict[
    row[0] if isinstance(row[0], bytes) else row[0].tobytes()
].append(row[1:])
```
Only call `tobytes()` when `row[0]` is not already of type `bytes`. Otherwise, this will result in errors.
```python
def _get_conninfo(config: PostgreSQLConfig) -> str:
    """Get the `conninfo` argument required for connection objects."""
    return (
        f"postgresql://{config.user}"
        f":{config.password}"
        f"@{config.host}"
        f":{int(config.port)}"
        f"/{config.database}"
    )


def _get_conn_kwargs(config: PostgreSQLConfig) -> Dict[str, Any]:
    """Get the additional `kwargs` required for connection objects."""
    return {
        "sslmode": config.sslmode,
        "sslkey": config.sslkey_path,
        "sslcert": config.sslcert_path,
        "sslrootcert": config.sslrootcert_path,
        "options": "-c search_path={}".format(config.db_schema or config.user),
    }
```
Helper functions to prevent code duplication in the above methods.
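A self-contained sketch of what the two helpers produce (the `cfg` stand-in is illustrative; the real object is Feast's `PostgreSQLConfig` model, and the exact connect call in the PR may differ):

```python
from types import SimpleNamespace

# Illustrative stand-in for PostgreSQLConfig.
cfg = SimpleNamespace(
    user="feast", password="secret", host="localhost",
    port=5432, database="feature_store", db_schema="public",
)

# Mirrors _get_conninfo: a libpq connection URI.
conninfo = (
    f"postgresql://{cfg.user}:{cfg.password}"
    f"@{cfg.host}:{int(cfg.port)}/{cfg.database}"
)

# Mirrors the "options" entry of _get_conn_kwargs.
options = "-c search_path={}".format(cfg.db_schema or cfg.user)

print(conninfo)  # postgresql://feast:secret@localhost:5432/feature_store
print(options)   # -c search_path=public

# Both a plain connection and a pool can then be built from the same pair,
# e.g. (not executed here): psycopg.connect(conninfo, options=options)
```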
```python
nr_columns = df.shape[1]
placeholders = ", ".join(["%s"] * nr_columns)
query = f"INSERT INTO {table_name} VALUES ({placeholders})"
values = df.replace({np.NaN: None}).to_numpy().tolist()

with _get_conn(config) as conn, conn.cursor() as cur:
    cur.execute(_df_to_create_table_sql(df, table_name))
    cur.executemany(query, values)
```
- Moved the parsing of variables further to the top of the function.
- Again, we need to replace `execute_values` with `executemany`.
- Again, we need to explicitly set the number of placeholders. Since this function should be able to handle a dynamic number of columns, we use the `placeholders` variable.
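The placeholder construction can be isolated as a tiny sketch (`build_insert_query` is a hypothetical name):

```python
def build_insert_query(table_name: str, nr_columns: int) -> str:
    # psycopg3's executemany needs one %s per column, so the placeholder
    # list is generated from the column count, instead of the single
    # "VALUES %s" that psycopg2's execute_values expanded itself.
    placeholders = ", ".join(["%s"] * nr_columns)
    return f"INSERT INTO {table_name} VALUES ({placeholders})"
```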
```python
@pytest.mark.parametrize(
    "conn_type",
    [ConnectionType.singleton, ConnectionType.pool],
    ids=lambda v: f"conn_type:{v}",
)
```
Test both ConnectionTypes
```
charset-normalizer==3.3.2
    # via requests
click==8.1.7
    # via
    #   feast (setup.py)
```
Did you use `lock-python-dependencies-all` to generate these files? The `feast (setup.py)` lines shouldn't have been added, I think.
Yes, I used that command indeed!
Do you have any thoughts on what might be causing this and how to resolve it?
Not sure honestly, I'll try to look into it. We can still merge regardless, it's not a blocker, just a bunch of extra line changes in the PR.
LGTM
Hey @franciscojavierarceo, would you perhaps be able to do another pass on this PR? :)
Looks like the other PR I merged caused conflicts here. Mind fixing them? Then we can merge.
I just pushed the update! @HaoXuAI
We already tried to install and run this feature branch in one of our downstream projects. We noticed that there is an (edge) case where the required dependency combination cannot be satisfied: one of Feast's requirements is SQLAlchemy>=1, while the `postgresql+psycopg` SQLAlchemy dialect needed for psycopg3 requires SQLAlchemy 2.0+. The question now is how to handle this. We think there are a number of potential solutions.
There are more ways to tackle this issue, but I'm curious to hear your thoughts on what you think might be the best way forward. @tokoko I would love to hear your thoughts as well on this matter if you have the time :)
Any reason we're not able to bump SQLAlchemy to 2.0+?
@job-almekinders Thanks for pointing that out. However, I don't believe this warrants any action from us. Those kinds of diamond dependencies are just part of life in libraries like this, unfortunately. I'm sure there are a few others similar to this lurking around that we haven't noticed before. If a user has some hard dependency on sqlalchemy1 and also uses postgres in feast only for the registry, they are free to forgo
I guess we can, but as long as we don't absolutely need to, it's better to leave it as is, so as not to cause unnecessary diamond dependency problems for downstream libraries.
@HaoXuAI This will probably break the Snowflake integration, as SQLAlchemy 2 isn't supported yet by the Snowflake Python libraries. See this relevant issue: snowflakedb/snowflake-sqlalchemy#380
That makes sense! If this is not a blocker, then I think we are good to move forward with this PR, at least from our end :)
I think Snowflake can use the snowflake-python module, which doesn't depend on SQLAlchemy.
I think he meant snowflake-backed sql registry, not a separate snowflake registry.
I'm fine with removing EOL versions if that will get us anything. Probably best to follow up in a different issue, though. @HaoXuAI This is good to merge, right?
…v#4303) * Makefile: Formatting Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Makefile: Exclude Snowflake tests for postgres offline store tests Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Bootstrap: Use conninfo Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Tests: Make connection string compatible with psycopg3 Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Tests: Test connection type pool and singleton Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Global: Replace conn.set_session() calls to be psycopg3 compatible Set connection read only Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Offline: Use psycopg3 Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Online: Use psycopg3 Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Online: Restructure online_write_batch Addition Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Online: Use correct placeholder Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Online: Handle bytes properly in online_read() Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Online: Whitespace Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Online: Open ConnectionPool Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Online: Add typehint Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Utils: Use psycopg3 Use new ConnectionPool Pass kwargs as named argument Use executemany over execute_values Remove not-required open argument in psycopg.connect Improve Use SpooledTemporaryFile Use max_size and add docstring Properly write with StringIO Utils: Use SpooledTemporaryFile over StringIO object Add replace Fix df_to_postgres_table Remove import Utils Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Lint: Raise exceptions if cursor returned no columns or rows Add log statement Lint: Fix 
_to_arrow_internal Lint: Fix _get_entity_df_event_timestamp_range Update exception Use ZeroColumnQueryResult Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Add comment on +psycopg string Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Docs: Remove mention of psycopg2 Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Lint: Fix Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Default to postgresql+psycopg and log warning Update warning Fix Format warning Add typehints Use better variable name Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> * Solve merge conflicts Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com> --------- Signed-off-by: Job Almekinders <job.almekinders@teampicnic.com>
# [0.40.0](v0.39.0...v0.40.0) (2024-07-31) ### Bug Fixes * Added missing type ([#4315](#4315)) ([86af60a](86af60a)) * Avoid XSS attack from Jinjin2's Environment(). ([#4355](#4355)) ([40270e7](40270e7)) * CGO Memory leak issue in GO Feature server ([#4291](#4291)) ([43e198f](43e198f)) * Deprecated the datetime.utcfromtimestamp(). ([#4306](#4306)) ([21deec8](21deec8)) * Fix SQLite import issue ([#4294](#4294)) ([398ea3b](398ea3b)) * Increment operator to v0.39.0 ([#4368](#4368)) ([3ddb4fb](3ddb4fb)) * Minor typo in the unit test. ([#4296](#4296)) ([6c75e84](6c75e84)) * OnDemandFeatureView type inference for array types ([#4310](#4310)) ([c45ff72](c45ff72)) * Remove redundant batching in PostgreSQLOnlineStore.online_write_batch and fix progress bar ([#4331](#4331)) ([0d89d15](0d89d15)) * Remove typo. ([#4351](#4351)) ([92d17de](92d17de)) * Retire the datetime.utcnow(). ([#4352](#4352)) ([a8bc696](a8bc696)) * Update dask version to support pandas 1.x ([#4326](#4326)) ([a639d61](a639d61)) * Update Feast object metadata in the registry ([#4257](#4257)) ([8028ae0](8028ae0)) * Using one single function call for utcnow(). ([#4307](#4307)) ([98ff63c](98ff63c)) ### Features * Add async feature retrieval for Postgres Online Store ([#4327](#4327)) ([cea52e9](cea52e9)) * Add Async refresh to Sql Registry ([#4251](#4251)) ([f569786](f569786)) * Add SingleStore as an OnlineStore ([#4285](#4285)) ([2c38946](2c38946)) * Add Tornike to maintainers.md ([#4339](#4339)) ([8e8c1f2](8e8c1f2)) * Bump psycopg2 to psycopg3 for all Postgres components ([#4303](#4303)) ([9451d9c](9451d9c)) * Entity key deserialization ([#4284](#4284)) ([83fad15](83fad15)) * Ignore paths feast apply ([#4276](#4276)) ([b4d54af](b4d54af)) * Move get_online_features to OnlineStore interface ([#4319](#4319)) ([7072fd0](7072fd0)) * Port mssql contrib offline store to ibis ([#4360](#4360)) ([7914cbd](7914cbd)) ### Reverts * Revert "fix: Avoid XSS attack from Jinjin2's Environment()." 
([#4357](#4357)) ([cdeab48](cdeab48)), closes [#4355](#4355)
What this PR does / why we need it:
This PR upgrades the `psycopg2` dependency to the newer `psycopg3` dependency. See here for more information on the differences between the two versions. This is the 1st out of 2 PRs required to enable async feature retrieval for the Postgres online store.
While here:
- Make the `batch_size` argument configurable for the postgres online store materialization function. This fixes #4036 (Discussion: Pushing batches of data to online store: Should `conn.commit()` happen in the for loop or after?).
Additional remarks:
- The remaining changes in this commit are related to the linter. In psycopg3, stricter type hints on the Cursor object require handling cases where `cursor.description` might be None. Although psycopg2 could also return None for this, it wasn't previously accounted for.
Which issue(s) this PR fixes:
1st out of 2 PRs required to fix #4260