Fixed the behavior of the incremental schema change ignore option to properly handle the scenario when columns are dropped #980
resolves #
Problem
Fixes the same issue that was solved in dbt-databricks:
databricks/dbt-databricks#580
During an incremental run with the schema change option set to ignore, newly added columns are ignored as expected. However, when a SQL model is modified to remove columns, the run fails despite the ignore setting, because the generated insert statement references a column that does not exist in the temp table that was just created. According to the dbt documentation, an ignored schema change should not cause the job to fail, so this behavior has been corrected.
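For reference, a minimal incremental model using the ignore setting might look like the following (model and column names here are hypothetical, matching the column_2 example below):

```sql
-- models/my_incremental_model.sql (hypothetical example)
{{ config(
    materialized='incremental',
    on_schema_change='ignore'
) }}

select
    column_1,
    column_2,
    updated_at
from {{ ref('my_source_model') }}
{% if is_incremental() %}
where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```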
For example, in this use case, even if we remove column_2 from the SQL model, the insert query still references column_2 because it exists in the current target table schema. Since column_2 no longer exists in the temp table, the query fails.
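A sketch of the failing statement under these assumptions (the target and temp table names are illustrative):

```sql
-- column_2 is taken from the existing target schema, but it no longer
-- exists in the temp table built from the modified model, so this fails
insert into my_incremental_model (column_1, column_2, updated_at)
select column_1, column_2, updated_at
from my_incremental_model__dbt_tmp;
```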
The intended SQL insert statement looks like this:
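A sketch under the same assumptions, restricted to the columns that actually exist in the temp table:

```sql
-- only the columns still produced by the modified model are inserted;
-- the dropped column_2 stays in the target schema and is left null for new rows
insert into my_incremental_model (column_1, updated_at)
select column_1, updated_at
from my_incremental_model__dbt_tmp;
```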
dbt documentation
According to the default behavior described there, this should not happen:
https://docs.getdbt.com/docs/build/incremental-models#default-behavior
Solution
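A minimal sketch of the general approach, not the exact diff in this PR: before rendering the insert, intersect the target's columns with the columns of the temp table so the statement never references a column the temp table no longer has (relation and variable names below are illustrative):

```sql
{# Hedged Jinja sketch: restrict dest_columns to columns present in the temp table #}
{% set dest_columns = adapter.get_columns_in_relation(existing_relation) %}
{% set tmp_columns = adapter.get_columns_in_relation(tmp_relation) %}
{% set tmp_column_names = tmp_columns | map(attribute='name') | map('lower') | list %}
{% set insert_columns = [] %}
{% for col in dest_columns %}
    {% if col.name | lower in tmp_column_names %}
        {% do insert_columns.append(col) %}
    {% endif %}
{% endfor %}
{# insert_columns is then used to build the insert/merge statement #}
```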
Checklist