Is this a new bug in dbt-snowflake?
I have searched the existing issues, and I could not find an existing issue for this bug
Current Behavior
When running an incremental model with the on_schema_change policy set to append_new_columns, a race condition can occur. If two jobs perform the column check at the same time, both generate the same ALTER TABLE statement, and the slower executor fails with a SQL compilation error because the column has already been added.
The core issue is that there is no guarantee the column schema remains unchanged between the column check and the subsequent ALTER TABLE statement.
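As a rough illustration of what happens (the relation and column names here are made up), both runs detect the same missing column and emit an equivalent statement; the second one to execute fails because the column already exists:

```sql
-- Both concurrent runs see new_col as missing and generate the same DDL
-- (names are illustrative only):
alter table analytics.my_incremental_model
    add column new_col varchar;

-- The faster job succeeds; the slower one then fails with a
-- SQL compilation error because new_col already exists.
```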
Expected Behavior
To avoid this issue, the process should either use the IF NOT EXISTS / IF EXISTS clauses provided by Snowflake or wrap the work in a transaction so that the column check and the column update are atomic.
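As a sketch of the first option, Snowflake accepts an idempotent form of these statements, so a second concurrent job re-running the same DDL becomes a no-op instead of an error (table and column names are illustrative):

```sql
-- Idempotent column addition: safe to issue from concurrent jobs.
alter table analytics.my_incremental_model
    add column if not exists new_col varchar;

-- Counterpart for column removal, e.g. under sync_all_columns:
alter table analytics.my_incremental_model
    drop column if exists old_col;
```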
Steps To Reproduce
1. Create an incremental table: define and create an incremental model in your dbt project (a minimal model sketch follows these steps).
2. Add a column: add a new column entry to the model.
3. Concurrently execute dbt run: execute the dbt process in two separate jobs concurrently.
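A minimal sketch of such a model, assuming a hypothetical source table (source, relation, and column names are made up); the config mirrors the policy described above:

```sql
-- models/my_incremental_model.sql (hypothetical reproduction model)
{{
    config(
        materialized='incremental',
        unique_key='id',
        on_schema_change='append_new_columns'
    )
}}

select
    id,
    payload,
    -- uncomment before the two concurrent runs to trigger the race:
    -- new_col,
    loaded_at
from {{ source('ingest', 'raw_events') }}

{% if is_incremental() %}
where loaded_at > (select max(loaded_at) from {{ this }})
{% endif %}
```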
@rattata2me Could you say more about your use case here? Is there a reason why you would want to have concurrent execution against the same target (especially given that you are changing the target table schema with a new column)?
I have to update the table contents dynamically every time there is a new data ingestion, and ingestions can be concurrent, coming from different users at the same time. This is a critical process that has to have 100% uptime, so I depend on the on_schema_change logic to roll out column changes while keeping the service running.