This document is meant to help you migrate your Terraform config to the newest version. In migration guides, we only describe deprecations or breaking changes and help you adjust your configuration to keep the same (or similar) behavior across versions.
(breaking change) database roles data source; field rename, schema structure changes, and adding missing filtering options
- `database` renamed to `in_database`
- Added `like` and `limit` filtering options
- `SHOW DATABASE ROLES` output is now put inside `database_roles.*.show_output`. Here's the list of currently available fields:
  - `created_on`
  - `name`
  - `is_default`
  - `is_current`
  - `is_inherited`
  - `granted_to_roles`
  - `granted_to_database_roles`
  - `granted_database_roles`
  - `owner`
  - `comment`
  - `owner_role_type`
New filtering options:
- `in`
- `like`
- `starts_with`
- `limit`
- `with_describe`
New output fields:
- `show_output`
- `describe_output`
Breaking changes:
- `database` and `schema` are now under the `in` field
- the `views` field now organizes the output of SHOW under the `show_output` field and the output of DESCRIBE under the `describe_output` field.
New fields:
- `row_access_policy`
- `aggregation_policy`
- `change_tracking`
- `is_recursive`
- `is_temporary`
- `data_metric_schedule`
- `data_metric_function`
- `column`
- added `show_output` field that holds the response from SHOW VIEWS.
- added `describe_output` field that holds the response from DESCRIBE VIEW. Note that one needs to grant sufficient privileges, e.g. with grant_ownership, on the tables used in this view. Otherwise, this field is not filled.
Removed fields:
- `or_replace` - `OR REPLACE` is added by the provider automatically when `copy_grants` is set to `"true"`
- `tag` - Please, use tag_association instead. The value of this field will be removed from the state automatically.
For this resource, the provider now uses policy references which requires a warehouse in the connection. Please, make sure you have either set a DEFAULT_WAREHOUSE for the user, or specified a warehouse in the provider configuration.
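A minimal sketch of the second option, setting a warehouse in the provider configuration (the warehouse name is a placeholder; the rest of your provider settings are assumed to already exist):

```terraform
provider "snowflake" {
  # ... your existing authentication settings ...

  # warehouse used for queries issued by the provider, e.g. policy reference lookups
  warehouse = "MY_WAREHOUSE"
}
```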
During the identifiers rework we decided to migrate resource ids from pipe-separated to regular Snowflake identifiers (e.g. `<database_name>|<schema_name>` -> `"<database_name>"."<schema_name>"`). The exception to that rule are identifiers that consist of multiple parts (as in the case of the grant_privileges_to_account_role resource id). The change was applied to the already refactored resources (only in the case of `snowflake_schema` and `snowflake_streamlit` is this a breaking change, because the rest of the objects use single-part identifiers in the format of `<name>`):
- `snowflake_api_authentication_integration_with_authorization_code_grant`
- `snowflake_api_authentication_integration_with_client_credentials`
- `snowflake_api_authentication_integration_with_jwt_bearer`
- `snowflake_oauth_integration_for_custom_clients`
- `snowflake_oauth_integration_for_partner_applications`
- `snowflake_external_oauth_integration`
- `snowflake_saml2_integration`
- `snowflake_scim_integration`
- `snowflake_database`
- `snowflake_shared_database`
- `snowflake_secondary_database`
- `snowflake_account_role`
- `snowflake_network_policy`
- `snowflake_warehouse`
No change is required; the state will be migrated automatically. The rest of the objects will be changed as they are worked on during the v1 object preparations.
(The same set of resources listed above was adjusted.) To prevent issues like this one, we added a diff suppression function that prevents Terraform from showing differences when only the quoting is different. In some cases, Snowflake output (mostly from SHOW commands) was dictating which field should be additionally quoted and which shouldn't, but that should no longer be the case. As in the change above, the rest of the objects will be changed as they are worked on during the v1 object preparations.
We added a new `fully_qualified_name` field to Snowflake resources. This should help with referencing other resources in fields that expect a fully qualified name. For example, instead of writing
object_name = "\"${snowflake_table.database}\".\"${snowflake_table.schema}\".\"${snowflake_table.name}\""
now we can write
object_name = snowflake_table.fully_qualified_name
See more details in the identifiers guide. See example usage.
Some of the resources are excluded from this change:
- deprecated resources
  - `snowflake_database_old`
  - `snowflake_oauth_integration`
  - `snowflake_saml_integration`
- resources for which a fully qualified name is not appropriate
  - `snowflake_account_parameter`
  - `snowflake_account_password_policy_attachment`
  - `snowflake_network_policy_attachment`
  - `snowflake_session_parameter`
  - `snowflake_table_constraint`
  - `snowflake_table_column_masking_policy_application`
  - `snowflake_tag_masking_policy_association`
  - `snowflake_tag_association`
  - `snowflake_user_password_policy_attachment`
  - `snowflake_user_public_keys`
- grant resources
(breaking change) removed `qualified_name` from `snowflake_masking_policy`, `snowflake_network_rule`, `snowflake_password_policy` and `snowflake_table`
Because of the introduction of the new `fully_qualified_name` field for all of the resources, `qualified_name` was removed from `snowflake_masking_policy`, `snowflake_network_rule`, `snowflake_password_policy` and `snowflake_table`. Please adjust your configurations. The state is migrated automatically.
We now correctly handle the situation when a stage was renamed or deleted externally (earlier it resulted in a permanent loop). No action is required on the user's side.
Connected issues: #2972
Data types are not yet handled entirely correctly inside the provider (read more e.g. in #2735). This will still be improved with the upcoming function, procedure, and table rework. For now, diff suppression was fixed for text and number data types in the table resource with the following assumptions/limitations:
- for numbers, the default precision is 38 and the default scale is 0 (following the docs)
- for number types, the following types are treated as synonyms: `NUMBER`, `DECIMAL`, `NUMERIC`, `INT`, `INTEGER`, `BIGINT`, `SMALLINT`, `TINYINT`, `BYTEINT`
- for text, the default length is 16777216 (following the docs)
- for text types, the following types are treated as synonyms: `VARCHAR`, `CHAR`, `CHARACTER`, `STRING`, `TEXT`
- whitespace and casing are ignored
- if the type arguments cannot be parsed, the defaults are used, so the diff may be suppressed unexpectedly (please report such cases)
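As an illustration of the synonym handling (a hedged sketch; the database, schema, table, and column names are made up), the following column types are treated as equivalent to their Snowflake-reported counterparts, so switching between synonymous spellings in config should not produce a plan:

```terraform
resource "snowflake_table" "example" {
  database = "MY_DB"
  schema   = "MY_SCHEMA"
  name     = "MY_TABLE"

  column {
    name = "ID"
    type = "INT" # reported by Snowflake as NUMBER(38,0); the diff is suppressed
  }

  column {
    name = "DESCRIPTION"
    type = "VARCHAR" # defaults to VARCHAR(16777216); treated the same as STRING or TEXT
  }
}
```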
No action is required on the user's side.
Connected issues: #3007
Because of the multiple changes in the resource, the easiest way to migrate is to follow our migration guide and perform a zero downtime migration. Alternatively, it is possible to follow some pointers below. Either way, familiarize yourself with the resource changes before bumping the version. Also, check the design decisions.
On our road to V1 we changed the approach to Snowflake parameters on the object level; now, we add them directly to the resource. This is a breaking change because now:
- Leaving the config empty does not set the default value on the object level but uses the one from the hierarchy on the Snowflake level instead (so after the version bump, a diff running `UNSET` statements is expected).
- This change is not compatible with `snowflake_object_parameter` - you have to set the parameter inside the `snowflake_user` resource IF you manage users through terraform AND you want to set the parameter on the user level.
For more details, check the Snowflake parameters.
The following set of parameters was added to the `snowflake_user` resource:
- ABORT_DETACHED_QUERY
- AUTOCOMMIT
- BINARY_INPUT_FORMAT
- BINARY_OUTPUT_FORMAT
- CLIENT_MEMORY_LIMIT
- CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX
- CLIENT_PREFETCH_THREADS
- CLIENT_RESULT_CHUNK_SIZE
- CLIENT_RESULT_COLUMN_CASE_INSENSITIVE
- CLIENT_SESSION_KEEP_ALIVE
- CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY
- CLIENT_TIMESTAMP_TYPE_MAPPING
- DATE_INPUT_FORMAT
- DATE_OUTPUT_FORMAT
- ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION
- ERROR_ON_NONDETERMINISTIC_MERGE
- ERROR_ON_NONDETERMINISTIC_UPDATE
- GEOGRAPHY_OUTPUT_FORMAT
- GEOMETRY_OUTPUT_FORMAT
- JDBC_TREAT_DECIMAL_AS_INT
- JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC
- JDBC_USE_SESSION_TIMEZONE
- JSON_INDENT
- LOCK_TIMEOUT
- LOG_LEVEL
- MULTI_STATEMENT_COUNT
- NOORDER_SEQUENCE_AS_DEFAULT
- ODBC_TREAT_DECIMAL_AS_INT
- QUERY_TAG
- QUOTED_IDENTIFIERS_IGNORE_CASE
- ROWS_PER_RESULTSET
- S3_STAGE_VPCE_DNS_NAME
- SEARCH_PATH
- SIMULATED_DATA_SHARING_CONSUMER
- STATEMENT_QUEUED_TIMEOUT_IN_SECONDS
- STATEMENT_TIMEOUT_IN_SECONDS
- STRICT_JSON_OUTPUT
- TIMESTAMP_DAY_IS_ALWAYS_24H
- TIMESTAMP_INPUT_FORMAT
- TIMESTAMP_LTZ_OUTPUT_FORMAT
- TIMESTAMP_NTZ_OUTPUT_FORMAT
- TIMESTAMP_OUTPUT_FORMAT
- TIMESTAMP_TYPE_MAPPING
- TIMESTAMP_TZ_OUTPUT_FORMAT
- TIMEZONE
- TIME_INPUT_FORMAT
- TIME_OUTPUT_FORMAT
- TRACE_LEVEL
- TRANSACTION_ABORT_ON_ERROR
- TRANSACTION_DEFAULT_ISOLATION_LEVEL
- TWO_DIGIT_CENTURY_START
- UNSUPPORTED_DDL_ACTION
- USE_CACHED_RESULT
- WEEK_OF_YEAR_POLICY
- WEEK_START
- ENABLE_UNREDACTED_QUERY_SYNTAX_ERROR
- NETWORK_POLICY
- PREVENT_UNLOAD_TO_INTERNAL_STAGES
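A minimal sketch of setting some of these parameters directly on the user (attribute names are assumed to follow the provider's lowercase convention for parameters; the values are illustrative):

```terraform
resource "snowflake_user" "example" {
  name       = "EXAMPLE_USER"
  login_name = "example_user"

  # parameters set on the user level; leaving them out falls back to the
  # value inherited from the Snowflake parameter hierarchy
  query_tag    = "terraform-managed"
  lock_timeout = 43200
}
```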
Connected issues: #2938
According to https://docs.snowflake.com/en/sql-reference/functions/all_user_names#usage-notes, `NAME`s are not considered sensitive data, but `LOGIN_NAME`s are. Previous versions of the provider had this the other way around. In this version, the `name` attribute was unmarked as sensitive, whereas `login_name` was marked as sensitive. This may break your configuration if you were using `login_name`s before, e.g. in a `for_each` loop.
The `display_name` attribute was marked as sensitive. It defaults to `name` if not provided on the Snowflake side. Because `name` is no longer sensitive, we also changed the setting for `display_name`.
Connected issues: #2662, #2668.
During the identifiers rework, we generalized how we correctly compute the differences for the identifier fields (read more in this document). A proper diff suppressor was applied to `default_warehouse`, `default_namespace`, and `default_role`. Also, all three of these attributes were corrected (e.g. handling spaces/hyphens in names).
Connected issues: #2836, #2942
Not every attribute can be updated in the state during read (like `password` in the `snowflake_user` resource). In situations where an update fails, we may end up with an incorrect state (read more in hashicorp/terraform-plugin-sdk#476). We now use a deprecated method from the plugin SDK, and for partially failed updates we preserve the resource's previous state. This fixes such situations for the `snowflake_user` resource.
Connected issues: #2970
The old field `default_secondary_roles` was removed in favour of the new, simpler, `default_secondary_roles_option`, because the only options that can currently be set are `('ALL')` and `()`. The logic to handle set element changes was convoluted and error-prone. Additionally, BCR 2024_07 complicated the matter even more.
Now:
- the default value is `DEFAULT` - it falls back to the Snowflake default (so `()` before and `('ALL')` after the BCR)
- to explicitly set it to `('ALL')`, use `ALL`
- to explicitly set it to `()`, use `NONE`
While migrating, the old `default_secondary_roles` will be removed from the state automatically and `default_secondary_roles_option` will be constructed based on the previous value (in some cases an apply may be necessary).
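A sketch of the corresponding config change (the resource and user names are placeholders):

```terraform
# before
resource "snowflake_user" "example" {
  name                    = "EXAMPLE_USER"
  default_secondary_roles = ["ALL"]
}

# after
resource "snowflake_user" "example" {
  name                           = "EXAMPLE_USER"
  default_secondary_roles_option = "ALL"
}
```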
Connected issues: #3038
Attributes that are no longer computed:
- `login_name`
- `display_name`
- `disabled`
- `default_role`
New fields:
- `middle_name`
- `days_to_expiry`
- `mins_to_unlock`
- `mins_to_bypass_mfa`
- `disable_mfa`
- `default_secondary_roles_option`
- `show_output` - holds the response from `SHOW USERS`. Remember that the field will only be recomputed if one of the user attributes is changed.
- `parameters` - holds the response from `SHOW PARAMETERS IN USER`.
Removed fields:
- `has_rsa_public_key`
- `default_secondary_roles` - replaced with `default_secondary_roles_option`
Default changes:
- `must_change_password`
- `disabled`
Type changes:
- `must_change_password`: bool -> string (to easily handle three-value logic (true, false, unknown) in provider configs; read more in the changes before v1 document)
- `disabled`: bool -> string (to easily handle three-value logic (true, false, unknown) in provider configs; read more in the changes before v1 document)
IMPORTANT NOTE: when querying users you don't have permissions to, the querying options are limited. Almost no fields will be filled in `show_output` (only empty or default values), and the DESCRIBE command cannot be called, so you have to set `with_describe = false`. Only the `parameters` output is not affected by the lack of privileges.
Changes:
- account checking logic was entirely removed
- `pattern` renamed to `like`
- `like`, `starts_with`, and `limit` filters added
- `SHOW USERS` output is enclosed in the `show_output` field inside `users` (all the previous fields in the `users` map were removed)
- Added outputs from DESC USER and SHOW PARAMETERS IN USER (they can be turned off by declaring `with_describe = false` and `with_parameters = false`; they're turned on by default). The additional parameters call DESC USER (with `with_describe` turned on) and SHOW PARAMETERS IN USER (with `with_parameters` turned on) per user returned by SHOW USERS. The outputs of both commands are held in the `users` entry, where DESC USER is saved in the `describe_output` field, and SHOW PARAMETERS IN USER in the `parameters` field. It's important to limit the records and calls to Snowflake to the minimum. That's why we recommend assessing which information you need from the data source and then providing strong filters and turning off additional fields for better plan performance.
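A hedged sketch of a filtered query with the additional outputs turned off (the filter values and the shape of the `limit` block are illustrative assumptions):

```terraform
data "snowflake_users" "example" {
  like        = "EXAMPLE_%" # strong filter to limit the returned records
  starts_with = "EXAMPLE"

  limit {
    rows = 10
  }

  # skip the extra DESC USER / SHOW PARAMETERS IN USER calls
  with_describe   = false
  with_parameters = false
}
```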
Connected issues: #2902
`snowflake_user_public_keys` is a resource that allows setting keys for a given user. Before this version, it was possible to have `snowflake_user` and `snowflake_user_public_keys` used next to each other.
Because the logic handling the keys in `snowflake_user` was fixed, it is advised to use `snowflake_user_public_keys` only when the user is not managed through terraform. Having both resources configured for the same user will result in improper behavior.
To migrate, in case of having two resources:
- copy the keys to `rsa_public_key` and `rsa_public_key2` in `snowflake_user`
- remove `snowflake_user_public_keys` from state (following https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/docs/technical-documentation/resource_migration.md#resource-migration)
- remove `snowflake_user_public_keys` from config
`snowflake_user_password_policy_attachment` is not addressed in the current version. Attaching other user policies is not addressed in the current version. Both topics will be addressed in the following versions.
`service` and `legacy_service` user types are currently not supported. They will be supported in the following versions as separate resources (namely `snowflake_service_user` and `snowflake_legacy_service_user`).
In order to avoid dropping `PUBLIC` schemas, we have decided to use `ALTER` instead of `OR REPLACE` during creation. In the future we are planning to use `CREATE OR ALTER` when it becomes available for schemas.
In order to fix issues in v0.93.0, when a resource has the Azure SCIM client, the `sync_password` field is now set to the `default` value in the state. The state will be migrated automatically.
Renamed fields:
- renamed `is_managed` to `with_managed_access`
- renamed `data_retention_days` to `data_retention_time_in_days`
Please rename these fields in your configuration files. The state will be migrated automatically.
Removed fields:
- `tag` - The value of this field will be removed from the state automatically. Please, use tag_association instead.
New fields:
- the following set of parameters was added:
  - `max_data_extension_time_in_days`
  - `external_volume`
  - `catalog`
  - `replace_invalid_characters`
  - `default_ddl_collation`
  - `storage_serialization_policy`
  - `log_level`
  - `trace_level`
  - `suspend_task_after_num_failures`
  - `task_auto_retry_attempts`
  - `user_task_managed_initial_warehouse_size`
  - `user_task_timeout_ms`
  - `user_task_minimum_trigger_interval_in_seconds`
  - `quoted_identifiers_ignore_case`
  - `enable_console_output`
  - `pipe_execution_paused`
- added `show_output` field that holds the response from SHOW SCHEMAS.
- added `describe_output` field that holds the response from DESCRIBE SCHEMA. Note that one needs to grant sufficient privileges, e.g. with grant_ownership, on all objects in the schema. Otherwise, this field is not filled.
- added `parameters` field that holds the response from SHOW PARAMETERS IN SCHEMA.
We now allow creating and managing `PUBLIC` schemas. When the name of the schema is `PUBLIC`, it's created with `OR_REPLACE`. Please be careful with this operation, because you may experience data loss. `OR_REPLACE` does `DROP` before `CREATE`, so all objects in the schema will be dropped, and this is not visible in the Terraform plan. To restore data-related objects that might have been accidentally or intentionally deleted, please read about Time Travel. The alternative is to import the `PUBLIC` schema manually and then manage it with Terraform. We've decided this based on #2826.
To easily handle three-value logic (true, false, unknown) in provider configs, the type of `is_transient` and `with_managed_access` was changed from boolean to string.
Terraform should recreate resources for configs lacking `is_transient` (`DROP` and then `CREATE` will be run underneath). To prevent this behavior, please set the `is_transient` field.
For more details about default values, please refer to the changes before v1 document.
Terraform should perform an action for configs lacking `with_managed_access` (`ALTER SCHEMA DISABLE MANAGED ACCESS` will be run underneath, which should not affect the Snowflake object, because `MANAGED ACCESS` is not set by default).
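A hedged sketch of the new string-typed attributes (the database and schema names are placeholders):

```terraform
resource "snowflake_schema" "example" {
  database = "MY_DB"
  name     = "MY_SCHEMA"

  # booleans are now expressed as strings so that "unset" can be represented
  is_transient        = "false"
  with_managed_access = "true"
}
```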
Changes:
- `database` is removed and can be specified inside the `in` field.
- `like`, `in`, `starts_with`, and `limit` fields enable filtering.
- SHOW SCHEMAS output is enclosed in the `show_output` field inside `schemas`.
- Added outputs from DESC SCHEMA and SHOW PARAMETERS IN SCHEMA (they can be turned off by declaring `with_describe = false` and `with_parameters = false`; they're turned on by default). The additional parameters call DESC SCHEMA (with `with_describe` turned on) and SHOW PARAMETERS IN SCHEMA (with `with_parameters` turned on) per schema returned by SHOW SCHEMAS. The outputs of both commands are held in the `schemas` entry, where DESC SCHEMA is saved in the `describe_output` field, and SHOW PARAMETERS IN SCHEMA in the `parameters` field. It's important to limit the records and calls to Snowflake to the minimum. That's why we recommend assessing which information you need from the data source and then providing strong filters and turning off additional fields for better plan performance.
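A hedged sketch of a filtered schemas query (the shape of the `in` block is an assumption; names are placeholders):

```terraform
data "snowflake_schemas" "example" {
  in {
    database = "MY_DB"
  }
  like = "APP_%"

  # skip the extra DESC SCHEMA / SHOW PARAMETERS IN SCHEMA calls
  with_describe   = false
  with_parameters = false
}
```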
The already existing `snowflake_role` was deprecated in favor of the new `snowflake_account_role`. The old resource got upgraded to have the same features as the new one. The only difference is the deprecation message on the old resource.
New fields:
- added `show_output` field that holds the response from SHOW ROLES. Remember that the field will only be recomputed if one of the fields (`name` or `comment`) is changed.
Changes:
- New `in_class` filtering option to filter out roles by class name, e.g. `in_class = "SNOWFLAKE.CORE.BUDGET"`
- `pattern` was renamed to `like`
- the output of SHOW is enclosed in `show_output`, so what was previously e.g. `roles.0.comment` is now `roles.0.show_output.0.comment`
Added a new resource for managing streamlits. See reference docs. In this resource, we decided to split `ROOT_LOCATION` in Snowflake into two fields: `stage`, representing the stage fully qualified name, and `directory_location`, containing the path within this stage to the root location.
Added a new datasource enabling querying and filtering streamlits. Notes:
- all results are stored in the `streamlits` field.
- `like`, `in`, and `limit` fields enable streamlit filtering.
- SHOW STREAMLITS output is enclosed in the `show_output` field inside `streamlits`.
- Output from DESC STREAMLIT (which can be turned off by declaring `with_describe = false`; it's turned on by default) is enclosed in the `describe_output` field inside `streamlits`. The additional parameters call DESC STREAMLIT (with `with_describe` turned on) per streamlit returned by SHOW STREAMLITS. It's important to limit the records and calls to Snowflake to the minimum. That's why we recommend assessing which information you need from the data source and then providing strong filters and turning off additional fields for better plan performance.
No migration required.
New behavior:
- `name` is no longer marked as a ForceNew parameter. When changed, it will now perform an ALTER RENAME operation instead of re-creating the resource with the new name.
- Additional validation was added to `blocked_ip_list` to inform about specifying the `0.0.0.0/0` IP. More details in the official documentation.
New fields:
- `show_output` and `describe_output` added to hold the results returned by `SHOW` and `DESCRIBE` commands. These fields will only be recomputed when the specified fields change.
Added a new datasource enabling querying and filtering network policies. Notes:
- all results are stored in the `network_policies` field.
- the `like` field enables filtering.
- SHOW NETWORK POLICIES output is enclosed in the `show_output` field inside `network_policies`.
- Output from DESC NETWORK POLICY (which can be turned off by declaring `with_describe = false`; it's turned on by default) is enclosed in the `describe_output` field inside `network_policies`. The additional parameters call DESC NETWORK POLICY (with `with_describe` turned on) per network policy returned by SHOW NETWORK POLICIES. It's important to limit the records and calls to Snowflake to the minimum. That's why we recommend assessing which information you need from the data source and then providing strong filters and turning off additional fields for better plan performance.
Because of the issue #2948, we are relaxing the validations for the Snowflake parameter values. Read more in CHANGES_BEFORE_V1.md.
With this change we introduce the first resources redesigned for the V1. We have made a few design choices that will be reflected in these and in the further reworked resources. This includes:
- Handling the default values.
- Handling the "empty" values.
- Handling the Snowflake parameters.
- Saving the config values in the state.
- Providing a "raw Snowflake output" for the managed resources.
They are all described in short in the changes before v1 doc. Please familiarize yourself with these changes before the upgrade.
Following the announcement, we have removed the old grant resources. The two resources snowflake_role_ownership_grant and snowflake_user_ownership_grant were not listed in the announcement, but they were also marked as deprecated. We are removing them too to conclude the grants redesign saga.
Added new API authentication resources, i.e.:
- `snowflake_api_authentication_integration_with_authorization_code_grant`
- `snowflake_api_authentication_integration_with_client_credentials`
- `snowflake_api_authentication_integration_with_jwt_bearer`
See reference doc.
(new feature) snowflake_oauth_integration_for_custom_clients and snowflake_oauth_integration_for_partner_applications resources
To enhance clarity and functionality, the new resources `snowflake_oauth_integration_for_custom_clients` and `snowflake_oauth_integration_for_partner_applications` have been introduced to replace the previous `snowflake_oauth_integration`. Recognizing that the old resource carried multiple responsibilities within a single entity, we opted to divide it into two more specialized resources.
The newly introduced resources are aligned with the latest Snowflake documentation at the time of implementation, and adhere to our new conventions.
This segregation was based on the `oauth_client` attribute, where `CUSTOM` corresponds to `snowflake_oauth_integration_for_custom_clients`, while the other values align with `snowflake_oauth_integration_for_partner_applications`.
Added a new datasource enabling querying and filtering all types of security integrations. Notes:
- all results are stored in the `security_integrations` field.
- the `like` field enables security integration filtering.
- SHOW SECURITY INTEGRATIONS output is enclosed in the `show_output` field inside `security_integrations`.
- Output from DESC SECURITY INTEGRATION (which can be turned off by declaring `with_describe = false`; it's turned on by default) is enclosed in the `describe_output` field inside `security_integrations`. DESC SECURITY INTEGRATION returns different properties based on the integration type. Consult the documentation to check which ones will be filled for which integration. The additional parameters call DESC SECURITY INTEGRATION (with `with_describe` turned on) per security integration returned by SHOW SECURITY INTEGRATIONS. It's important to limit the records and calls to Snowflake to the minimum. That's why we recommend assessing which information you need from the data source and then providing strong filters and turning off additional fields for better plan performance.
Renamed fields:
- `type` to `external_oauth_type`
- `issuer` to `external_oauth_issuer`
- `token_user_mapping_claims` to `external_oauth_token_user_mapping_claim`
- `snowflake_user_mapping_attribute` to `external_oauth_snowflake_user_mapping_attribute`
- `scope_mapping_attribute` to `external_oauth_scope_mapping_attribute`
- `jws_keys_urls` to `external_oauth_jws_keys_url`
- `rsa_public_key` to `external_oauth_rsa_public_key`
- `rsa_public_key_2` to `external_oauth_rsa_public_key_2`
- `blocked_roles` to `external_oauth_blocked_roles_list`
- `allowed_roles` to `external_oauth_allowed_roles_list`
- `audience_urls` to `external_oauth_audience_list`
- `any_role_mode` to `external_oauth_any_role_mode`
- `scope_delimiter` to `external_oauth_scope_delimiter`
to align with the Snowflake docs. Please rename these fields in your configuration files. The state will be migrated automatically.
Conditional force new was added for the following attributes when they are removed from config. There are no alter statements supporting UNSET on these fields.
- `external_oauth_rsa_public_key`
- `external_oauth_rsa_public_key_2`
- `external_oauth_scope_mapping_attribute`
- `external_oauth_jws_keys_url`
- `external_oauth_token_user_mapping_claim`
The fields listed below cannot be set at the same time in Snowflake. They are marked as conflicting fields.
- `external_oauth_jws_keys_url` <-> `external_oauth_rsa_public_key`
- `external_oauth_jws_keys_url` <-> `external_oauth_rsa_public_key_2`
- `external_oauth_allowed_roles_list` <-> `external_oauth_blocked_roles_list`
The fields listed below had a diff suppress which removed '-' from strings. This behavior is now removed, so if you had '-' in these strings, please remove them. Note that '-' in these values is not allowed by Snowflake.
- `external_oauth_snowflake_user_mapping_attribute`
- `external_oauth_type`
- `external_oauth_any_role_mode`
The new `snowflake_saml2_integration` is introduced and deprecates `snowflake_saml_integration`. It contains new fields and follows our new conventions, making it more stable. The old SAML integration wasn't changed, so no migration is needed, but we recommend eventually migrating to the newer counterpart.
Now, the `sync_password` field will set the state value to `default` whenever the value is not set in the config. This indicates that the value on the Snowflake side is set to the Snowflake default.
Warning: This change causes issues for the Azure SCIM client (see #2946). The workaround is to remove the resource from the state with `terraform state rm`, add `sync_password = true` to the config, and import with `terraform import "snowflake_scim_integration.test" "aad_provisioning"`. After these steps, there should be no errors and no diff on this field. This behavior is fixed in v0.94 with a state upgrader.
Renamed field `provisioner_role` to `run_as_role` to align with the Snowflake docs. Please rename this field in your configuration files. The state will be migrated automatically.
Fields added to the resource:
- `enabled`
- `sync_password`
- `comment`
New field `enabled` is required. Previously, the default value during create in Snowflake was `true`. If you created a resource with Terraform, please add `enabled = true` to keep the same value.
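A hedged sketch of an Azure SCIM integration under the new schema (the integration name is a placeholder; `AAD_PROVISIONER` is the role Snowflake expects for Azure SCIM):

```terraform
resource "snowflake_scim_integration" "example" {
  name        = "EXAMPLE_SCIM"
  scim_client = "AZURE"
  run_as_role = "AAD_PROVISIONER" # renamed from provisioner_role
  enabled     = true              # now required; matches the previous implicit default
}
```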
ForceNew was added for the following attributes (because there are no usable SQL alter statements for them):
- `scim_client`
- `run_as_role`
Because of the multiple changes in the resource, the easiest way to migrate is to follow our migration guide and perform a zero downtime migration. Alternatively, it is possible to follow some pointers below. Either way, familiarize yourself with the resource changes before bumping the version. Also, check the design decisions.
As part of the redesign, we are removing the default values for attributes that have their defaults on the Snowflake side, to reduce coupling with the provider (read more in default values). Because of that, the following defaults were removed:
- `comment` (previously `""`)
- `enable_query_acceleration` (previously `false`)
- `query_acceleration_max_scale_factor` (previously `8`)
- `warehouse_type` (previously `"STANDARD"`)
- `max_concurrency_level` (previously `8`)
- `statement_queued_timeout_in_seconds` (previously `0`)
- `statement_timeout_in_seconds` (previously `172800`)
Beware! For attributes that are Snowflake parameters (in the case of the warehouse: `max_concurrency_level`, `statement_queued_timeout_in_seconds`, and `statement_timeout_in_seconds`), this is a breaking change (read more in Snowflake parameters). Previously, not setting a value for them was treated as a fallback to values hardcoded on the provider side. This caused warehouse creation with these parameters set on the warehouse level (and not using the Snowflake default from the hierarchy; read more in the parameters documentation). To keep the previous values, fill in your configs with the default values listed above.
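A sketch of pinning the previous provider-side defaults explicitly (attribute names and values taken from the list above; the warehouse name is a placeholder):

```terraform
resource "snowflake_warehouse" "example" {
  name           = "EXAMPLE_WH"
  warehouse_size = "XSMALL"

  # previous provider-side defaults, now set explicitly to keep the old behavior
  max_concurrency_level               = 8
  statement_queued_timeout_in_seconds = 0
  statement_timeout_in_seconds        = 172800
}
```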
All previous defaults were aligned with the current Snowflake ones; however, it's not possible to distinguish between a filled-out value and no value in the automatic state upgrader. Therefore, if a given attribute is not filled out in your configuration, terraform will try to perform an update after the change (to UNSET the given attribute to the Snowflake default); it should result in no changes on the Snowflake object side, but it is required to make the Terraform state aligned with your config. All other optional fields that were not set inside the config at all (because of the change in handling state logic on our provider side) will follow the same logic. To avoid the need for the changes, fill out the default fields in your config. Alternatively, run `terraform apply`; no further changes should be shown as a part of the plan.
There are three migrations that should happen automatically with the version bump:
- incorrect `2XLARGE`, `3XLARGE`, `4XLARGE`, `5XLARGE`, `6XLARGE` values for warehouse size are changed to the proper ones
- the deprecated `wait_for_provisioning` attribute is removed from the state
- the old empty resource monitor attribute is cleaned (earlier it was set to the `"null"` string)
Before the changes, removing warehouse size from the config was not handled properly. Because UNSET is not supported for warehouse size (check the docs - usage notes for unset) and there are multiple defaults possible, removing the size from config will result in the resource recreation.
As part of the redesign, we are adjusting validations or removing them to reduce coupling between Snowflake and the provider. Because of that, the following validations were removed/adjusted/added:
- `max_cluster_count` - adjusted: added higher bound (10) according to Snowflake docs
- `min_cluster_count` - adjusted: added higher bound (10) according to Snowflake docs
- `auto_suspend` - adjusted: added `0` as a valid value
- `warehouse_size` - adjusted: removed incorrect `2XLARGE`, `3XLARGE`, `4XLARGE`, `5XLARGE`, `6XLARGE` values
- `resource_monitor` - added: validation for a valid identifier (still subject to change during identifiers rework)
- `max_concurrency_level` - added: validation according to MAX_CONCURRENCY_LEVEL parameter docs
- `statement_queued_timeout_in_seconds` - added: validation according to STATEMENT_QUEUED_TIMEOUT_IN_SECONDS parameter docs
- `statement_timeout_in_seconds` - added: validation according to STATEMENT_TIMEOUT_IN_SECONDS parameter docs
The `wait_for_provisioning` field was deprecated a long time ago. It's high time it was removed from the schema.
Previously, `query_acceleration_max_scale_factor` depended on the `enable_query_acceleration` parameter, but this is not required on the Snowflake side. After migration, `terraform plan` should suggest changes if `enable_query_acceleration` was earlier set to false (manually or from the default) and if `query_acceleration_max_scale_factor` was set in config.
Previously, a change to the `initially_suspended` attribute caused resource recreation. This attribute is used only during creation (to create a suspended warehouse). There is no reason to recreate the whole object just because the initial state changed.
To easily handle three-value logic (true, false, unknown) in provider configs, the type of `auto_resume` and `enable_query_acceleration` was changed from boolean to string. This should not require updating existing configs (a boolean/int value should be accepted and the state will be migrated to string automatically); however, we recommend changing config values to strings. Terraform should perform an action for configs lacking `auto_resume` or `enable_query_acceleration` (`ALTER WAREHOUSE UNSET AUTO_RESUME` and/or `ALTER WAREHOUSE UNSET ENABLE_QUERY_ACCELERATION` will be run underneath, which should not affect the Snowflake object, because `auto_resume` and `enable_query_acceleration` are false by default).
`resource_monitor` is an identifier, and its handling logic may still change slightly as part of https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/ROADMAP.md#identifiers-rework. It should be handled automatically (without manual actions needed on the user's side), though this is not guaranteed.
- Added `like` field to enable warehouse filtering
- Added missing fields returned by SHOW WAREHOUSES and enclosed its output in the `show_output` field.
- Added outputs from DESC WAREHOUSE and SHOW PARAMETERS IN WAREHOUSE (they can be turned off by declaring `with_describe = false` and `with_parameters = false`; they're turned on by default). The additional parameters call DESC WAREHOUSE (with `with_describe` turned on) and SHOW PARAMETERS IN WAREHOUSE (with `with_parameters` turned on) per warehouse returned by SHOW WAREHOUSES. The outputs of both commands are held in the `warehouses` entry, where DESC WAREHOUSE is saved in the `describe_output` field, and SHOW PARAMETERS IN WAREHOUSE in the `parameters` field. It's important to limit the records and calls to Snowflake to the minimum. That's why we recommend assessing which information you need from the data source and then providing strong filters and turning off additional fields for better plan performance.
You can read more in "raw Snowflake output".
As part of the preparation for v1, we split up the database resource into multiple ones:
- Standard database - can be used as `snowflake_database` (replaces the old one and is used to create databases with the optional ability to become a primary database ready for replication)
- Shared database - can be used as `snowflake_shared_database` (used to create databases from externally defined shares)
- Secondary database - can be used as `snowflake_secondary_database` (used to create replicas of databases from external sources)
All the field changes in comparison to the previous database resource are:
- `is_transient`
  - in `snowflake_shared_database`
    - removed: the field is removed from `snowflake_shared_database` as it doesn't have any effect on shared databases.
- `from_database` - database cloning was entirely removed and is not possible with any of the new database resources.
- `from_share` - the parameter was moved to the dedicated resource for databases created from shares, `snowflake_shared_database`. Right now, it's a text field instead of a map. Additionally, instead of the legacy account identifier format we're expecting the new one, which with the share looks like this: `<organization_name>.<account_name>.<share_name>`. For more information on account identifiers, visit the official documentation.
- `from_replication` - the parameter was moved to the dedicated resource for databases created from primary databases, `snowflake_secondary_database`.
- `replication_configuration` - renamed: was renamed to `replication` and is only available in `snowflake_database`. Its internal schema changed so that instead of a list of accounts, we expect a list of nested objects with accounts for which replication (and optionally failover) should be enabled. More information about converting between both versions here. Additionally, instead of the legacy account identifier format we're expecting the new one, which looks like this: `<organization_name>.<account_name>` (it will be automatically migrated to the recommended format by the state upgrader). For more information on account identifiers, visit the official documentation.
- `data_retention_time_in_days`
  - in `snowflake_shared_database`
    - removed: the field is removed from `snowflake_shared_database` as it doesn't have any effect on shared databases.
  - in `snowflake_database` and `snowflake_secondary_database`
    - adjusted: now it uses a different approach that won't set it to -1 as a default value, but rather fills the field with the current value from Snowflake (this can still change).
- added: the following set of parameters was added to every database type:
  - `max_data_extension_time_in_days`
  - `external_volume`
  - `catalog`
  - `replace_invalid_characters`
  - `default_ddl_collation`
  - `storage_serialization_policy`
  - `log_level`
  - `trace_level`
  - `suspend_task_after_num_failures`
  - `task_auto_retry_attempts`
  - `user_task_managed_initial_warehouse_size`
  - `user_task_timeout_ms`
  - `user_task_minimum_trigger_interval_in_seconds`
  - `quoted_identifiers_ignore_case`
  - `enable_console_output`
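A hedged sketch of the new share-based resource with the new account identifier format (all names are placeholders):

```terraform
resource "snowflake_shared_database" "example" {
  name       = "SHARED_DB"
  from_share = "<organization_name>.<account_name>.<share_name>"
}
```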
The split was done (and will be done for several objects during the refactor) to simplify the resource on the maintainability and usage level. Its purpose was also to divide the resources by their specific purpose rather than cramming every use case of an object into one resource.
We made a decision to use the existing `snowflake_database` resource and redesign it into a standard database. The previous `snowflake_database` was renamed to `snowflake_database_old`, and the current `snowflake_database` contains a completely new implementation that follows the guidelines we set for V1.
When upgrading to the 0.93.0 version, the automatic state upgrader should cover the migration for databases that didn't have the following fields set:
- `from_share` (now, the new `snowflake_shared_database` should be used instead)
- `from_replica` (now, the new `snowflake_secondary_database` should be used instead)
- `replication_configuration`
For configurations containing `replication_configuration` like this one:
resource "snowflake_database" "test" {
name = "<name>"
replication_configuration {
accounts = ["<account_locator>", "<account_locator_2>"]
ignore_edition_check = true
}
}
You have to transform the configuration into the following format (notice the change from account locator into the new account identifier format):
resource "snowflake_database" "test" {
name = "%s"
replication {
enable_to_account {
account_identifier = "<organization_name>.<account_name>"
with_failover = false
}
enable_to_account {
account_identifier = "<organization_name_2>.<account_name_2>"
with_failover = false
}
}
ignore_edition_check = true
}
If you had `from_database` set, you should follow our resource migration guide to remove the database from the state and later import it in the newer version of the provider. Otherwise, it may cause issues when migrating to v0.93.0. For now, we're dropping the possibility to create a clone database from other databases. The only way will be to clone a database manually and import it as `snowflake_database`, but if cloned databases diverge in behavior from standard databases, it may cause issues.
For databases with one of the fields mentioned above, manual migration will be needed. Please refer to our migration guide to perform a zero downtime migration.
If you would like to upgrade to the latest version and postpone the migration, you still have to perform the manual migration to the `snowflake_database_old` resource by following the zero downtime migrations document. The only difference would be that instead of writing/generating new configurations, you just have to rename the existing ones to use the `_old` suffix.
- `terse` and `history` fields were removed.
- `replication_configuration` field was removed from `databases`.
- `pattern` was replaced by the `like` field.
- Additional filtering options added (`limit`).
- Added missing fields returned by SHOW DATABASES and enclosed its output in the `show_output` field.
- Added outputs from DESC DATABASE and SHOW PARAMETERS IN DATABASE (they can be turned off by declaring `with_describe = false` and `with_parameters = false`; they're turned on by default). The additional parameters call DESC DATABASE (with `with_describe` turned on) and SHOW PARAMETERS IN DATABASE (with `with_parameters` turned on) per database returned by SHOW DATABASES. The outputs of both commands are held in the `databases` entry, where DESC DATABASE is saved in the `describe_output` field, and SHOW PARAMETERS IN DATABASE in the `parameters` field. It's important to limit the records and calls to Snowflake to the minimum. That's why we recommend assessing which information you need from the data source and then providing strong filters and turning off additional fields for better plan performance.
While solving issue #2733 we have introduced diff suppression for `column.type`. To make it work correctly we have also added a validation to it. It should not cause any problems, but it's worth noting in case any data types that the provider is not aware of are used.
Diff suppression for `arguments.type` is needed for the same reason as above for the `snowflake_table` resource.
Now the `tag_masking_policy_association` resource will only accept fully qualified names separated by a dot `.` instead of a pipe `|`.
Before
resource "snowflake_tag_masking_policy_association" "name" {
tag_id = snowflake_tag.this.id
masking_policy_id = snowflake_masking_policy.example_masking_policy.id
}
After
resource "snowflake_tag_masking_policy_association" "name" {
tag_id = "\"${snowflake_tag.this.database}\".\"${snowflake_tag.this.schema}\".\"${snowflake_tag.this.name}\""
masking_policy_id = "\"${snowflake_masking_policy.example_masking_policy.database}\".\"${snowflake_masking_policy.example_masking_policy.schema}\".\"${snowflake_masking_policy.example_masking_policy.name}\""
}
It's more verbose now, but after identifier rework it should be similar to the previous form.
The `ForceNew` field was removed in favor of in-place updates for the `name` parameter in:
- `snowflake_file_format`
- `snowflake_masking_policy`
So from now on, these objects won't be re-created when the `name` changes; instead, only the name will be updated with `ALTER .. RENAME TO` statements.
From now on, the `snowflake_procedure`'s `execute_as` parameter allows only two values: OWNER and CALLER (case-insensitive). Setting other values earlier resulted in falling back to the Snowflake default (currently OWNER) and creating a permadiff.
The `snowflake_grants` datasource was refreshed as part of the ongoing Grants Redesign.
To be aligned with the convention in other grant resources, `role` was renamed to `account_role` for the following fields:
- `grants_to.role`
- `grants_of.role`
- `future_grants_to.role`
To migrate, simply change `role` to `account_role` in the aforementioned fields.
`grants_to.share` was a text field. Because Snowflake introduced the new syntax `SHOW GRANTS TO SHARE <share_name> IN APPLICATION PACKAGE <app_package_name>` (check more in the docs), the type was changed to object. To migrate, simply change:
data "snowflake_grants" "example_to_share" {
grants_to {
share = "some_share"
}
}
to
data "snowflake_grants" "example_to_share" {
grants_to {
share {
share_name = "some_share"
}
}
}
Note: `in_application_package` is not yet supported.
`future_grants_in.schema` was an object field that allowed setting a required `schema_name` and an optional `database_name`. Our strategy is to be explicit, so the schema field was changed to a string and a fully qualified name is expected. To migrate, change:
data "snowflake_grants" "example_future_in_schema" {
future_grants_in {
schema {
database_name = "some_database"
schema_name = "some_schema"
}
}
}
to
data "snowflake_grants" "example_future_in_schema" {
future_grants_in {
schema = "\"some_database\".\"some_schema\""
}
}
`grants_to` was enriched with three new options:
- `application`
- `application_role`
- `database_role`
No migration work is needed here.
`grants_of` was enriched with two new options:
- `database_role`
- `application_role`
No migration work is needed here.
`future_grants_to` was enriched with one new option:
- `database_role`
No migration work is needed here.
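A hedged sketch of querying grants for one of the new options (the fully qualified database role name is a placeholder, quoted the same way as in the schema example above):

```terraform
data "snowflake_grants" "example_to_database_role" {
  grants_to {
    database_role = "\"some_database\".\"some_database_role\""
  }
}
```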
Descriptions of attributes were altered. More examples were added (both for old and new features).
Previously, in `snowflake_database`, when creating a database from a share, it was possible to provide `from_share.provider` in the format of `<org_name>.<account_name>`. It worked even though we expected an account locator, because our "external" identifier wasn't quoting its string representation. To be consistent with other identifier types, we quoted the output of "external" identifiers, which makes such configurations break (previously, they were working "by accident"). To fix it, the previous format of `<org_name>.<account_name>` has to be changed to the account locator format `<account_locator>` (mind that it's now case-sensitive). The account locator can be retrieved by calling `select current_account();` on the sharing account. In the future, we would like to eventually come back to the `<org_name>.<account_name>` format as it's recommended by Snowflake.
There were several issues reported about the configuration hierarchy, e.g. #2294 and #2242. In fact, the order of precedence described in the docs was not followed. This led to incorrect behavior.
After migrating to this version, the hierarchy from the docs should be followed:
The Snowflake provider will use the following order of precedence when determining which credentials to use:
1) Provider Configuration
2) Environment Variables
3) Config File
BEWARE: your configurations will be affected by that change because they may have been leveraging the incorrect configuration precedence. Please be sure to check all the configurations before running terraform.
Longer context in #2517.
After this change, one apply may be required to update the state correctly for failover group resources using `ACCOUNT PARAMETERS`.
(behavior change) Database `data_retention_time_in_days` + Schema `data_retention_days` + Table `data_retention_time_in_days`
For context #2356. To make data retention fields truly optional (previously they were producing a plan every time no value was set), we added `-1` as a possible value, and it is set as the default. That got rid of the unexpected plans when no value is set and added the possibility to use the default value assigned by Snowflake (see the data retention period).
For context #2356. To define data retention days for a table, `data_retention_time_in_days` should be used, as the deprecated `data_retention_days` field is being removed.
The `type` of the constraint was limited back to `UNIQUE`, `PRIMARY KEY`, and `FOREIGN KEY`. The reason for that is that the syntax for an out-of-line constraint (docs) does not contain `NOT NULL`. It is noted as a behavior change, but in some way it is not; with the previous implementation it did not work at all with `type` set to `NOT NULL`, because the generated statement was not a valid Snowflake statement. We will consider adding `NOT NULL` back because it can be set by `ALTER COLUMN columnX SET NOT NULL`, but first we want to revisit the whole resource design.
The docs were inconsistent. The example prior to the 0.86.0 version showed using `table.id` as the `table_id` reference. The description of the `table_id` parameter never allowed such a value (`table.id` is a `|`-delimited identifier representation and only the `.`-separated values were listed in the docs: https://registry.terraform.io/providers/Snowflake-Labs/snowflake/0.85.0/docs/resources/table_constraint#required). The misuse of the `table.id` parameter will result in an error after migrating to 0.86.0. To make the config work, please remove and reimport the constraint resource from the state as described in the resource migration doc.
After discussions in #2535 we decided to provide a temporary workaround in 0.87.0 version, so that the manual migration is not necessary. It allows skipping the migration and jumping straight to 0.87.0 version. However, the temporary workaround will be gone in one of the future versions. Please adjust to the newly suggested reference with the new resources you create.
The `return_null_allowed` attribute default value is now `true`. This is a behavior change because it was `false` before. The reason it was changed is to match the expected default value in the documentation: "Default: The default is NULL (i.e. the function can return NULL values)."
The `comment` attribute is now optional. It was required before, but it is not required in the Snowflake API.
The `schema` attribute is now required together with the `database` attribute to match the old implementation (`SHOW EXTERNAL FUNCTIONS IN SCHEMA "<database>"."<schema>"`). In the future this may change to make schema optional.
In recent changes, we introduced new grant resources to replace the old ones. To aid with the migration, we wrote a guide showing one of the possible ways to migrate deprecated resources to their new counterparts. As the guide is more general and applies to every version (and provider), we moved it here.
The `return_behavior` parameter is deprecated because it is also deprecated in the Snowflake API.
The `return_type` has become force new because there is no way to alter it without dropping and recreating the function.
Setting `copy_options` to `ON_ERROR = 'CONTINUE'` would result in a permadiff. Use `ON_ERROR = CONTINUE` (without single quotes) or bump to v0.89.0, in which the behavior was fixed.
`notification_provider` becomes required and has three possible values: `AZURE_STORAGE_QUEUE`, `AWS_SNS`, and `GCP_PUBSUB`.
It is still possible to set it to `AWS_SQS`, but because there is no underlying SQL, it will result in an error.
Attributes `aws_sqs_arn` and `aws_sqs_role_arn` will be ignored.
Computed attributes `aws_sqs_external_id` and `aws_sqs_iam_user_arn` won't be updated.
Force new was added for the following attributes (because there are no usable SQL alter statements for them):
- `azure_storage_queue_primary_uri`
- `azure_tenant_id`
- `gcp_pubsub_subscription_name`
- `gcp_pubsub_topic_name`
The `direction` parameter is deprecated because it is added automatically on the SDK level.
The `type` parameter is deprecated because it is added automatically on the SDK level (and basically it's always `QUEUE`).
In this change we have done a provider refactor to make it more complete and customizable by supporting more options that were already available in the Golang Snowflake driver. This led to several attributes being added and a few being deprecated. We will focus on the deprecated ones and show you how to adapt your current configuration to the new changes.
provider "snowflake" {
# before
username = "username"
# after
user = "username"
}
provider "snowflake" {
# before
browser_auth = false
oauth_access_token = "<access_token>"
oauth_refresh_token = "<refresh_token>"
oauth_client_id = "<client_id>"
oauth_client_secret = "<client_secret>"
oauth_endpoint = "<endpoint>"
oauth_redirect_url = "<redirect_uri>"
# after
authenticator = "ExternalBrowser"
token = "<access_token>"
token_accessor {
refresh_token = "<refresh_token>"
client_id = "<client_id>"
client_secret = "<client_secret>"
token_endpoint = "<endpoint>"
redirect_uri = "<redirect_uri>"
}
}
Specifying a region is a legacy thing and according to https://docs.snowflake.com/en/user-guide/admin-account-identifier you can specify a region as a part of account parameter. Specifying account parameter with the region is also considered legacy, but with this approach it will be easier to convert only your account identifier to the new preferred way of specifying account identifier.
provider "snowflake" {
# before
region = "<cloud_region_id>"
# after
account = "<account_locator>.<cloud_region_id>"
}
provider "snowflake" {
# before
private_key_path = "<filepath>"
# after
private_key = file("<filepath>")
}
provider "snowflake" {
# before
session_params = {}
# after
params = {}
}
Before the change, the `authenticator` parameter did not have to be set for private key authentication and was deduced by the provider. The change is a result of aligning the configuration with the underlying gosnowflake driver. The authentication type is required there, and it defaults to the user+password one. From this version, set `authenticator` to `JWT` explicitly.
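A hedged sketch of a key-pair configuration with the now-explicit authenticator (field names and the account format are taken from the snippets above; the user name and file path are placeholders):

```terraform
provider "snowflake" {
  account       = "<account_locator>.<cloud_region_id>"
  user          = "terraform_user"
  authenticator = "JWT" # must now be set explicitly for key-pair authentication
  private_key   = file("<filepath>")
}
```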