Hi folks, due to high NAT gateway cost we have to use COPY unload when reading from Snowflake, but COPY unload currently doesn't support microsecond-level precision (only milliseconds). I can work on a PR to add it, but I'm wondering whether you have any concerns about this.
@sfc-gh-mrui thanks for your reply. The problem is with the SimpleDateFormat (https://github.com/snowflakedb/spark-snowflake/blob/master/src/main/scala/net/snowflake/spark/snowflake/Conversions.scala#L66) used to parse timestamps during COPY unload. That format only supports millisecond precision and produces a wrong timestamp when the input carries microseconds. For instance, given the string 2023-03-01 07:54:56.191173, it treats 191173 as milliseconds, so it adds 191000 ms / 1000 / 60 ≈ 3 min 11 s to the timestamp and puts the remaining 173 into the milliseconds field, yielding 2023-03-01 07:58:07.173000.
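To make the failure mode concrete, here is a minimal sketch reproducing the shift described above, plus one possible way to parse microseconds correctly via `java.time` (an assumption for illustration, not necessarily what the PR would do):

```java
import java.text.SimpleDateFormat;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class MicrosParseBug {
    public static void main(String[] args) throws Exception {
        // Bug: in SimpleDateFormat, "SSS" means milliseconds, not a fraction
        // of a second. In (default) lenient mode the whole 6-digit fraction
        // 191173 is read as 191173 ms = 3 min 11.173 s and added on.
        SimpleDateFormat legacy = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        System.out.println(legacy.format(legacy.parse("2023-03-01 07:54:56.191173")));
        // prints 2023-03-01 07:58:07.173

        // Possible fix: DateTimeFormatter's 'S' is fraction-of-second, so a
        // 6-digit pattern parses the microseconds as a true sub-second value.
        LocalDateTime fixed = LocalDateTime.parse("2023-03-01 07:54:56.191173",
                DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS"));
        System.out.println(fixed.getNano());
        // prints 191173000 (0.191173 s in nanoseconds)
    }
}
```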