🎉 Redshift Destination: Disable STATUPDATE flag when using S3 staging to speed up performance #5745
## What
Describe what the change is solving
Apply the COPY optimization to the Redshift destination.
#4871
## How
Describe the solution
I changed the Redshift COPY command to speed up copying CSV files from the S3 staging bucket into Redshift. Specifically, I set the STATUPDATE parameter to OFF. STATUPDATE controls whether COPY computes table statistics used to optimize future queries on the table. Since the table in our use case is temporary and the queries against it are short-lived, computing these statistics adds unnecessary overhead. Performance improved by roughly 10% in my test runs.
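For illustration, below is a minimal, self-contained sketch of what the change amounts to: a Redshift COPY from an S3 manifest into a temporary staging table with `STATUPDATE OFF`. This is not the connector's actual code (the real change lives in `RedshiftStreamCopier.java`), and every name here (JDBC URL, schema, table, bucket, IAM role) is a placeholder.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Illustrative sketch only, not the connector's implementation.
// Loads staged CSV files from an S3 manifest into a temporary table,
// telling Redshift to skip the automatic statistics computation.
public class CopyWithoutStats {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details.
    String jdbcUrl = "jdbc:redshift://example-cluster:5439/dev";

    String copyQuery = String.format(
        "COPY %s.%s FROM '%s' "
            + "CREDENTIALS 'aws_iam_role=%s' "
            + "CSV REGION '%s' TIMEFORMAT 'auto' "
            + "STATUPDATE OFF "   // skip statistics collection for this COPY
            + "MANIFEST;",
        "staging_schema",
        "_airbyte_tmp_table",
        "s3://my-staging-bucket/path/manifest",
        "arn:aws:iam::123456789012:role/redshift-copy",
        "us-east-1");

    try (Connection conn = DriverManager.getConnection(jdbcUrl, "user", "password");
         Statement stmt = conn.createStatement()) {
      stmt.execute(copyQuery);
    }
  }
}
```

Note that, per the Redshift documentation, COPY updates statistics automatically when the target table starts out empty, which is exactly the staging-table case here; `STATUPDATE OFF` only skips that step for this COPY, so tables that do need fresh statistics are unaffected.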

## Recommended reading order
1. `RedshiftStreamCopier.java`

## Pre-merge Checklist
**Community member or Airbyter**

- [ ] Secrets in the connector's spec are annotated with `airbyte_secret`
- [ ] Integration tests pass: `./gradlew :airbyte-integrations:connectors:<name>:integrationTest`
- [ ] Documentation updated: the connector's `README.md` and `docs/integrations/<source or destination>/<name>.md`, including changelog. See changelog example.

**Airbyter**

If this is a community PR, the Airbyte engineer reviewing this PR is responsible for the below items.

- [ ] `/test connector=connectors/<name>` command is passing.
- [ ] `/publish` command described here.